00:00:00.002 Started by upstream project "autotest-nightly" build number 3890 00:00:00.002 originally caused by: 00:00:00.003 Started by upstream project "nightly-trigger" build number 3270 00:00:00.003 originally caused by: 00:00:00.003 Started by timer 00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.137 The recommended git tool is: git 00:00:00.138 using credential 00000000-0000-0000-0000-000000000002 00:00:00.139 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.175 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.208 Using shallow fetch with depth 1 00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.208 > git --version # timeout=10 00:00:00.235 > git --version # 'git version 2.39.2' 00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.003 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.015 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.026 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:08.026 > git config core.sparsecheckout # timeout=10 00:00:08.036 > git read-tree -mu HEAD # timeout=10 00:00:08.051 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:08.071 Commit message: "inventory: add WCP3 to free inventory" 00:00:08.071 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:08.171 [Pipeline] Start of Pipeline 00:00:08.183 [Pipeline] library 00:00:08.184 Loading library shm_lib@master 00:00:08.184 Library shm_lib@master is cached. Copying from home. 00:00:08.198 [Pipeline] node 00:00:08.208 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.210 [Pipeline] { 00:00:08.218 [Pipeline] catchError 00:00:08.219 [Pipeline] { 00:00:08.234 [Pipeline] wrap 00:00:08.245 [Pipeline] { 00:00:08.254 [Pipeline] stage 00:00:08.257 [Pipeline] { (Prologue) 00:00:08.423 [Pipeline] sh 00:00:08.705 + logger -p user.info -t JENKINS-CI 00:00:08.725 [Pipeline] echo 00:00:08.727 Node: GP11 00:00:08.737 [Pipeline] sh 00:00:09.041 [Pipeline] setCustomBuildProperty 00:00:09.053 [Pipeline] echo 00:00:09.054 Cleanup processes 00:00:09.059 [Pipeline] sh 00:00:09.366 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.367 843862 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.383 [Pipeline] sh 00:00:09.675 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.675 ++ grep -v 'sudo pgrep' 00:00:09.675 ++ awk '{print $1}' 00:00:09.675 + sudo kill -9 00:00:09.675 + true 00:00:09.692 [Pipeline] cleanWs 00:00:09.703 [WS-CLEANUP] Deleting project workspace... 00:00:09.703 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.709 [WS-CLEANUP] done 00:00:09.714 [Pipeline] setCustomBuildProperty 00:00:09.732 [Pipeline] sh 00:00:10.018 + sudo git config --global --replace-all safe.directory '*' 00:00:10.115 [Pipeline] httpRequest 00:00:10.146 [Pipeline] echo 00:00:10.148 Sorcerer 10.211.164.101 is alive 00:00:10.159 [Pipeline] httpRequest 00:00:10.164 HttpMethod: GET 00:00:10.165 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.166 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.180 Response Code: HTTP/1.1 200 OK 00:00:10.181 Success: Status code 200 is in the accepted range: 200,404 00:00:10.181 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:20.209 [Pipeline] sh 00:00:20.493 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:20.510 [Pipeline] httpRequest 00:00:20.547 [Pipeline] echo 00:00:20.549 Sorcerer 10.211.164.101 is alive 00:00:20.557 [Pipeline] httpRequest 00:00:20.562 HttpMethod: GET 00:00:20.563 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:20.564 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:20.586 Response Code: HTTP/1.1 200 OK 00:00:20.586 Success: Status code 200 is in the accepted range: 200,404 00:00:20.586 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:19.974 [Pipeline] sh 00:01:20.255 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:22.799 [Pipeline] sh 00:01:23.082 + git -C spdk log --oneline -n5 00:01:23.083 719d03c6a sock/uring: only register net impl if supported 00:01:23.083 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:23.083 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:23.083 6c7c1f57e accel: add sequence outstanding stat 00:01:23.083 3bc8e6a26 accel: add utility to put task 00:01:23.094 [Pipeline] } 00:01:23.109 [Pipeline] // stage 00:01:23.117 [Pipeline] stage 00:01:23.119 [Pipeline] { (Prepare) 00:01:23.136 [Pipeline] writeFile 00:01:23.152 [Pipeline] sh 00:01:23.434 + logger -p user.info -t JENKINS-CI 00:01:23.446 [Pipeline] sh 00:01:23.727 + logger -p user.info -t JENKINS-CI 00:01:23.739 [Pipeline] sh 00:01:24.023 + cat autorun-spdk.conf 00:01:24.023 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.023 SPDK_TEST_NVMF=1 00:01:24.023 SPDK_TEST_NVME_CLI=1 00:01:24.023 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.023 SPDK_TEST_NVMF_NICS=e810 00:01:24.023 SPDK_RUN_ASAN=1 00:01:24.023 SPDK_RUN_UBSAN=1 00:01:24.023 NET_TYPE=phy 00:01:24.030 RUN_NIGHTLY=1 00:01:24.034 [Pipeline] readFile 00:01:24.060 [Pipeline] withEnv 00:01:24.062 [Pipeline] { 00:01:24.074 [Pipeline] sh 00:01:24.357 + set -ex 00:01:24.358 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:24.358 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.358 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.358 ++ SPDK_TEST_NVMF=1 00:01:24.358 ++ SPDK_TEST_NVME_CLI=1 00:01:24.358 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.358 ++ SPDK_TEST_NVMF_NICS=e810 00:01:24.358 ++ SPDK_RUN_ASAN=1 00:01:24.358 ++ SPDK_RUN_UBSAN=1 00:01:24.358 ++ NET_TYPE=phy 00:01:24.358 ++ RUN_NIGHTLY=1 00:01:24.358 + case $SPDK_TEST_NVMF_NICS in 00:01:24.358 + DRIVERS=ice 00:01:24.358 + [[ tcp == \r\d\m\a ]] 00:01:24.358 + 
[[ -n ice ]] 00:01:24.358 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:24.358 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:24.358 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:24.358 rmmod: ERROR: Module irdma is not currently loaded 00:01:24.358 rmmod: ERROR: Module i40iw is not currently loaded 00:01:24.358 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:24.358 + true 00:01:24.358 + for D in $DRIVERS 00:01:24.358 + sudo modprobe ice 00:01:24.358 + exit 0 00:01:24.367 [Pipeline] } 00:01:24.386 [Pipeline] // withEnv 00:01:24.392 [Pipeline] } 00:01:24.410 [Pipeline] // stage 00:01:24.421 [Pipeline] catchError 00:01:24.423 [Pipeline] { 00:01:24.439 [Pipeline] timeout 00:01:24.439 Timeout set to expire in 50 min 00:01:24.441 [Pipeline] { 00:01:24.457 [Pipeline] stage 00:01:24.459 [Pipeline] { (Tests) 00:01:24.476 [Pipeline] sh 00:01:24.763 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.763 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.763 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.763 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:24.763 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.763 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:24.763 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:24.763 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:24.763 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:24.763 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:24.763 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:24.763 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.763 + source /etc/os-release 00:01:24.763 ++ NAME='Fedora Linux' 00:01:24.763 ++ VERSION='38 (Cloud Edition)' 00:01:24.763 ++ ID=fedora 00:01:24.763 ++ VERSION_ID=38 00:01:24.763 ++ VERSION_CODENAME= 00:01:24.763 ++ PLATFORM_ID=platform:f38 00:01:24.763 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:24.763 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:24.763 ++ LOGO=fedora-logo-icon 00:01:24.763 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:24.763 ++ HOME_URL=https://fedoraproject.org/ 00:01:24.763 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:24.763 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:24.763 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:24.763 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:24.763 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:24.763 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:24.763 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:24.763 ++ SUPPORT_END=2024-05-14 00:01:24.763 ++ VARIANT='Cloud Edition' 00:01:24.763 ++ VARIANT_ID=cloud 00:01:24.763 + uname -a 00:01:24.763 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:24.763 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:25.701 Hugepages 00:01:25.701 node hugesize free / total 00:01:25.701 node0 1048576kB 0 / 0 00:01:25.701 node0 2048kB 0 / 0 00:01:25.701 node1 1048576kB 0 / 0 00:01:25.701 node1 2048kB 0 / 0 00:01:25.701 00:01:25.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:25.701 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:25.701 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:25.701 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:25.701 I/OAT 0000:00:04.3 8086 0e23 0 
ioatdma - - 00:01:25.701 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:25.701 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:25.701 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:25.701 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:25.701 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:25.959 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:25.959 + rm -f /tmp/spdk-ld-path 00:01:25.959 + source autorun-spdk.conf 00:01:25.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.959 ++ SPDK_TEST_NVMF=1 00:01:25.959 ++ SPDK_TEST_NVME_CLI=1 00:01:25.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.959 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.959 ++ SPDK_RUN_ASAN=1 00:01:25.959 ++ SPDK_RUN_UBSAN=1 00:01:25.959 ++ NET_TYPE=phy 00:01:25.959 ++ RUN_NIGHTLY=1 00:01:25.959 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:25.959 + [[ -n '' ]] 00:01:25.959 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.959 + for M in /var/spdk/build-*-manifest.txt 00:01:25.959 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:25.959 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:25.959 + for M in /var/spdk/build-*-manifest.txt 00:01:25.959 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:25.959 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:25.959 ++ uname 00:01:25.959 + [[ Linux == \L\i\n\u\x ]] 00:01:25.959 + sudo dmesg -T 00:01:25.959 + sudo dmesg --clear 00:01:25.959 + dmesg_pid=845130 00:01:25.959 + [[ Fedora Linux == FreeBSD ]] 00:01:25.959 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.959 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.959 + sudo dmesg -Tw 00:01:25.959 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:25.959 + [[ -x /usr/src/fio-static/fio ]] 00:01:25.959 + export FIO_BIN=/usr/src/fio-static/fio 00:01:25.959 + FIO_BIN=/usr/src/fio-static/fio 00:01:25.959 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:25.959 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:25.959 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:25.959 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.959 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.959 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:25.959 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.959 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.959 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.959 Test configuration: 00:01:25.959 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.959 SPDK_TEST_NVMF=1 00:01:25.959 SPDK_TEST_NVME_CLI=1 00:01:25.959 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.959 SPDK_TEST_NVMF_NICS=e810 00:01:25.959 SPDK_RUN_ASAN=1 00:01:25.959 SPDK_RUN_UBSAN=1 00:01:25.959 NET_TYPE=phy 00:01:25.959 RUN_NIGHTLY=1 07:28:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:25.959 07:28:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.959 07:28:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.959 07:28:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.959 07:28:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.959 07:28:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.959 07:28:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.959 07:28:17 -- paths/export.sh@5 -- $ export PATH 00:01:25.960 07:28:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.960 07:28:17 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:25.960 07:28:17 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:25.960 07:28:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721021297.XXXXXX 00:01:25.960 07:28:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721021297.CKWmzC 00:01:25.960 07:28:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:25.960 07:28:17 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:25.960 07:28:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:25.960 07:28:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:25.960 07:28:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.960 07:28:17 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:25.960 07:28:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:25.960 07:28:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.960 07:28:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:25.960 07:28:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:25.960 07:28:17 -- pm/common@17 -- $ local monitor 00:01:25.960 07:28:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.960 07:28:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.960 07:28:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.960 07:28:17 -- pm/common@21 -- $ date +%s 00:01:25.960 07:28:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.960 07:28:17 -- pm/common@21 -- $ date +%s 00:01:25.960 07:28:17 -- pm/common@25 -- $ sleep 1 00:01:25.960 07:28:17 -- pm/common@21 -- $ date +%s 00:01:25.960 07:28:17 -- pm/common@21 -- $ date +%s 00:01:25.960 07:28:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021297 00:01:25.960 07:28:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021297 00:01:25.960 07:28:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021297 00:01:25.960 07:28:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021297 00:01:25.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021297_collect-vmstat.pm.log 00:01:25.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021297_collect-cpu-load.pm.log 00:01:25.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021297_collect-cpu-temp.pm.log 00:01:25.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021297_collect-bmc-pm.bmc.pm.log 00:01:26.896 07:28:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:26.896 07:28:18 -- 
spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.896 07:28:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.896 07:28:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:26.896 07:28:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.896 Mon Jul 15 05:28:18 AM UTC 2024 00:01:26.896 07:28:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.896 v24.09-pre-202-g719d03c6a 00:01:26.896 07:28:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:26.896 07:28:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:26.896 07:28:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.896 07:28:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.896 07:28:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.155 ************************************ 00:01:27.155 START TEST asan 00:01:27.155 ************************************ 00:01:27.155 07:28:18 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:27.155 using asan 00:01:27.155 00:01:27.155 real 0m0.000s 00:01:27.155 user 0m0.000s 00:01:27.155 sys 0m0.000s 00:01:27.155 07:28:18 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.155 07:28:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.155 ************************************ 00:01:27.155 END TEST asan 00:01:27.155 ************************************ 00:01:27.155 07:28:18 -- common/autotest_common.sh@1142 -- $ return 0 00:01:27.155 07:28:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.155 07:28:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.155 07:28:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:27.155 07:28:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:27.155 07:28:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.155 ************************************ 00:01:27.155 START TEST ubsan 00:01:27.155 ************************************ 00:01:27.155 07:28:18 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:27.155 using ubsan 00:01:27.155 00:01:27.155 real 0m0.000s 00:01:27.155 user 0m0.000s 00:01:27.155 sys 0m0.000s 00:01:27.155 07:28:18 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.155 07:28:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.155 ************************************ 00:01:27.155 END TEST ubsan 00:01:27.155 ************************************ 00:01:27.155 07:28:18 -- common/autotest_common.sh@1142 -- $ return 0 00:01:27.155 07:28:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:27.155 07:28:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.155 07:28:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.155 07:28:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.155 07:28:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.155 07:28:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.155 07:28:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.155 07:28:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.155 07:28:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:27.155 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:27.155 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:27.413 Using 'verbs' RDMA provider 00:01:37.960 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:47.962 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:47.962 Creating mk/config.mk...done. 00:01:47.962 Creating mk/cc.flags.mk...done. 00:01:47.962 Type 'make' to build. 00:01:47.962 07:28:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:47.962 07:28:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:47.962 07:28:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:47.962 07:28:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.962 ************************************ 00:01:47.962 START TEST make 00:01:47.962 ************************************ 00:01:47.962 07:28:38 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:47.962 make[1]: Nothing to be done for 'all'. 00:01:56.095 The Meson build system 00:01:56.095 Version: 1.3.1 00:01:56.095 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:56.095 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:56.095 Build type: native build 00:01:56.095 Program cat found: YES (/usr/bin/cat) 00:01:56.095 Project name: DPDK 00:01:56.095 Project version: 24.03.0 00:01:56.095 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:56.095 C linker for the host machine: cc ld.bfd 2.39-16 00:01:56.095 Host machine cpu family: x86_64 00:01:56.095 Host machine cpu: x86_64 00:01:56.095 Message: ## Building in Developer Mode ## 00:01:56.095 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:56.095 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:56.095 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:56.095 Program python3 found: YES (/usr/bin/python3) 00:01:56.095 Program cat found: YES (/usr/bin/cat) 00:01:56.095 Compiler for C supports arguments -march=native: YES 00:01:56.095 Checking for size of "void *" : 8 00:01:56.095 Checking for size of "void *" : 8 (cached) 00:01:56.095 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:56.095 Library m found: YES 00:01:56.095 Library numa found: YES 00:01:56.095 Has header "numaif.h" : YES 00:01:56.095 Library fdt found: NO 00:01:56.095 Library execinfo found: NO 00:01:56.095 Has header "execinfo.h" : YES 00:01:56.095 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:56.095 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:56.095 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:56.095 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:56.095 Run-time dependency openssl found: YES 3.0.9 00:01:56.095 Run-time dependency libpcap found: YES 1.10.4 00:01:56.095 Has header "pcap.h" with dependency libpcap: YES 00:01:56.095 Compiler for C supports arguments -Wcast-qual: YES 00:01:56.095 Compiler for C supports arguments -Wdeprecated: YES 00:01:56.095 Compiler for C supports arguments -Wformat: YES 00:01:56.095 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:56.095 Compiler for C supports arguments -Wformat-security: NO 00:01:56.095 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.095 Compiler for C supports 
arguments -Wmissing-prototypes: YES 00:01:56.095 Compiler for C supports arguments -Wnested-externs: YES 00:01:56.095 Compiler for C supports arguments -Wold-style-definition: YES 00:01:56.095 Compiler for C supports arguments -Wpointer-arith: YES 00:01:56.095 Compiler for C supports arguments -Wsign-compare: YES 00:01:56.095 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:56.095 Compiler for C supports arguments -Wundef: YES 00:01:56.095 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.095 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:56.095 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:56.095 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.095 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:56.095 Program objdump found: YES (/usr/bin/objdump) 00:01:56.095 Compiler for C supports arguments -mavx512f: YES 00:01:56.095 Checking if "AVX512 checking" compiles: YES 00:01:56.095 Fetching value of define "__SSE4_2__" : 1 00:01:56.095 Fetching value of define "__AES__" : 1 00:01:56.095 Fetching value of define "__AVX__" : 1 00:01:56.095 Fetching value of define "__AVX2__" : (undefined) 00:01:56.095 Fetching value of define "__AVX512BW__" : (undefined) 00:01:56.095 Fetching value of define "__AVX512CD__" : (undefined) 00:01:56.095 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:56.095 Fetching value of define "__AVX512F__" : (undefined) 00:01:56.095 Fetching value of define "__AVX512VL__" : (undefined) 00:01:56.095 Fetching value of define "__PCLMUL__" : 1 00:01:56.095 Fetching value of define "__RDRND__" : 1 00:01:56.095 Fetching value of define "__RDSEED__" : (undefined) 00:01:56.095 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:56.095 Fetching value of define "__znver1__" : (undefined) 00:01:56.095 Fetching value of define "__znver2__" : (undefined) 00:01:56.095 Fetching value of define "__znver3__" : (undefined) 00:01:56.095 Fetching value of define "__znver4__" : (undefined) 00:01:56.095 Library asan found: YES 00:01:56.095 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:56.095 Message: lib/log: Defining dependency "log" 00:01:56.095 Message: lib/kvargs: Defining dependency "kvargs" 00:01:56.095 Message: lib/telemetry: Defining dependency "telemetry" 00:01:56.095 Library rt found: YES 00:01:56.095 Checking for function "getentropy" : NO 00:01:56.095 Message: lib/eal: Defining dependency "eal" 00:01:56.095 Message: lib/ring: Defining dependency "ring" 00:01:56.095 Message: lib/rcu: Defining dependency "rcu" 00:01:56.095 Message: lib/mempool: Defining dependency "mempool" 00:01:56.095 Message: lib/mbuf: Defining dependency "mbuf" 00:01:56.095 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:56.095 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:56.095 Compiler for C supports arguments -mpclmul: YES 00:01:56.095 Compiler for C supports arguments -maes: YES 00:01:56.095 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:56.095 Compiler for C supports arguments -mavx512bw: YES 00:01:56.095 Compiler for C supports arguments -mavx512dq: YES 00:01:56.095 Compiler for C supports arguments -mavx512vl: YES 00:01:56.095 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:56.095 Compiler for C supports arguments -mavx2: YES 00:01:56.095 Compiler for C supports arguments -mavx: YES 00:01:56.095 Message: lib/net: Defining dependency "net" 00:01:56.095 Message: lib/meter: Defining 
dependency "meter" 00:01:56.095 Message: lib/ethdev: Defining dependency "ethdev" 00:01:56.095 Message: lib/pci: Defining dependency "pci" 00:01:56.095 Message: lib/cmdline: Defining dependency "cmdline" 00:01:56.095 Message: lib/hash: Defining dependency "hash" 00:01:56.095 Message: lib/timer: Defining dependency "timer" 00:01:56.095 Message: lib/compressdev: Defining dependency "compressdev" 00:01:56.095 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:56.095 Message: lib/dmadev: Defining dependency "dmadev" 00:01:56.095 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:56.095 Message: lib/power: Defining dependency "power" 00:01:56.095 Message: lib/reorder: Defining dependency "reorder" 00:01:56.095 Message: lib/security: Defining dependency "security" 00:01:56.095 Has header "linux/userfaultfd.h" : YES 00:01:56.095 Has header "linux/vduse.h" : YES 00:01:56.095 Message: lib/vhost: Defining dependency "vhost" 00:01:56.096 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:56.096 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:56.096 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:56.096 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:56.096 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:56.096 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:56.096 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:56.096 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:56.096 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:56.096 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:56.096 Program doxygen found: YES (/usr/bin/doxygen) 00:01:56.096 Configuring doxy-api-html.conf using configuration 00:01:56.096 Configuring doxy-api-man.conf using configuration 00:01:56.096 Program mandb found: YES (/usr/bin/mandb) 00:01:56.096 Program sphinx-build found: NO 00:01:56.096 Configuring rte_build_config.h using configuration 00:01:56.096 Message: 00:01:56.096 ================= 00:01:56.096 Applications Enabled 00:01:56.096 ================= 00:01:56.096 00:01:56.096 apps: 00:01:56.096 00:01:56.096 00:01:56.096 Message: 00:01:56.096 ================= 00:01:56.096 Libraries Enabled 00:01:56.096 ================= 00:01:56.096 00:01:56.096 libs: 00:01:56.096 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:56.096 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:56.096 cryptodev, dmadev, power, reorder, security, vhost, 00:01:56.096 00:01:56.096 Message: 00:01:56.096 =============== 00:01:56.096 Drivers Enabled 00:01:56.096 =============== 00:01:56.096 00:01:56.096 common: 00:01:56.096 00:01:56.096 bus: 00:01:56.096 pci, vdev, 00:01:56.096 mempool: 00:01:56.096 ring, 00:01:56.096 dma: 00:01:56.096 00:01:56.096 net: 00:01:56.096 00:01:56.096 crypto: 00:01:56.096 00:01:56.096 compress: 00:01:56.096 00:01:56.096 vdpa: 00:01:56.096 00:01:56.096 00:01:56.096 Message: 00:01:56.096 ================= 00:01:56.096 Content Skipped 00:01:56.096 ================= 00:01:56.096 00:01:56.096 apps: 00:01:56.096 dumpcap: explicitly disabled via build config 00:01:56.096 graph: explicitly disabled via build config 00:01:56.096 pdump: explicitly disabled via build config 00:01:56.096 proc-info: explicitly disabled via build config 00:01:56.096 test-acl: explicitly disabled via build config 00:01:56.096 test-bbdev: explicitly 
disabled via build config 00:01:56.096 test-cmdline: explicitly disabled via build config 00:01:56.096 test-compress-perf: explicitly disabled via build config 00:01:56.096 test-crypto-perf: explicitly disabled via build config 00:01:56.096 test-dma-perf: explicitly disabled via build config 00:01:56.096 test-eventdev: explicitly disabled via build config 00:01:56.096 test-fib: explicitly disabled via build config 00:01:56.096 test-flow-perf: explicitly disabled via build config 00:01:56.096 test-gpudev: explicitly disabled via build config 00:01:56.096 test-mldev: explicitly disabled via build config 00:01:56.096 test-pipeline: explicitly disabled via build config 00:01:56.096 test-pmd: explicitly disabled via build config 00:01:56.096 test-regex: explicitly disabled via build config 00:01:56.096 test-sad: explicitly disabled via build config 00:01:56.096 test-security-perf: explicitly disabled via build config 00:01:56.096 00:01:56.096 libs: 00:01:56.096 argparse: explicitly disabled via build config 00:01:56.096 metrics: explicitly disabled via build config 00:01:56.096 acl: explicitly disabled via build config 00:01:56.096 bbdev: explicitly disabled via build config 00:01:56.096 bitratestats: explicitly disabled via build config 00:01:56.096 bpf: explicitly disabled via build config 00:01:56.096 cfgfile: explicitly disabled via build config 00:01:56.096 distributor: explicitly disabled via build config 00:01:56.096 efd: explicitly disabled via build config 00:01:56.096 eventdev: explicitly disabled via build config 00:01:56.096 dispatcher: explicitly disabled via build config 00:01:56.096 gpudev: explicitly disabled via build config 00:01:56.096 gro: explicitly disabled via build config 00:01:56.096 gso: explicitly disabled via build config 00:01:56.096 ip_frag: explicitly disabled via build config 00:01:56.096 jobstats: explicitly disabled via build config 00:01:56.096 latencystats: explicitly disabled via build config 00:01:56.096 lpm: explicitly disabled via build config 00:01:56.096 member: explicitly disabled via build config 00:01:56.096 pcapng: explicitly disabled via build config 00:01:56.096 rawdev: explicitly disabled via build config 00:01:56.096 regexdev: explicitly disabled via build config 00:01:56.096 mldev: explicitly disabled via build config 00:01:56.096 rib: explicitly disabled via build config 00:01:56.096 sched: explicitly disabled via build config 00:01:56.096 stack: explicitly disabled via build config 00:01:56.096 ipsec: explicitly disabled via build config 00:01:56.096 pdcp: explicitly disabled via build config 00:01:56.096 fib: explicitly disabled via build config 00:01:56.096 port: explicitly disabled via build config 00:01:56.096 pdump: explicitly disabled via build config 00:01:56.096 table: explicitly disabled via build config 00:01:56.096 pipeline: explicitly disabled via build config 00:01:56.096 graph: explicitly disabled via build config 00:01:56.096 node: explicitly disabled via build config 00:01:56.096 00:01:56.096 drivers: 00:01:56.096 common/cpt: not in enabled drivers build config 00:01:56.096 common/dpaax: not in enabled drivers build config 00:01:56.096 common/iavf: not in enabled drivers build config 00:01:56.096 common/idpf: not in enabled drivers build config 00:01:56.096 common/ionic: not in enabled drivers build config 00:01:56.096 common/mvep: not in enabled drivers build config 00:01:56.096 common/octeontx: not in enabled drivers build config 00:01:56.096 bus/auxiliary: not in enabled drivers build config 00:01:56.096 bus/cdx: not in 
enabled drivers build config 00:01:56.096 bus/dpaa: not in enabled drivers build config 00:01:56.096 bus/fslmc: not in enabled drivers build config 00:01:56.096 bus/ifpga: not in enabled drivers build config 00:01:56.096 bus/platform: not in enabled drivers build config 00:01:56.096 bus/uacce: not in enabled drivers build config 00:01:56.096 bus/vmbus: not in enabled drivers build config 00:01:56.096 common/cnxk: not in enabled drivers build config 00:01:56.096 common/mlx5: not in enabled drivers build config 00:01:56.096 common/nfp: not in enabled drivers build config 00:01:56.096 common/nitrox: not in enabled drivers build config 00:01:56.096 common/qat: not in enabled drivers build config 00:01:56.096 common/sfc_efx: not in enabled drivers build config 00:01:56.096 mempool/bucket: not in enabled drivers build config 00:01:56.096 mempool/cnxk: not in enabled drivers build config 00:01:56.096 mempool/dpaa: not in enabled drivers build config 00:01:56.096 mempool/dpaa2: not in enabled drivers build config 00:01:56.096 mempool/octeontx: not in enabled drivers build config 00:01:56.096 mempool/stack: not in enabled drivers build config 00:01:56.096 dma/cnxk: not in enabled drivers build config 00:01:56.096 dma/dpaa: not in enabled drivers build config 00:01:56.096 dma/dpaa2: not in enabled drivers build config 00:01:56.096 dma/hisilicon: not in enabled drivers build config 00:01:56.096 dma/idxd: not in enabled drivers build config 00:01:56.096 dma/ioat: not in enabled drivers build config 00:01:56.096 dma/skeleton: not in enabled drivers build config 00:01:56.096 net/af_packet: not in enabled drivers build config 00:01:56.096 net/af_xdp: not in enabled drivers build config 00:01:56.096 net/ark: not in enabled drivers build config 00:01:56.096 net/atlantic: not in enabled drivers build config 00:01:56.096 net/avp: not in enabled drivers build config 00:01:56.096 net/axgbe: not in enabled drivers build config 00:01:56.096 net/bnx2x: not in enabled drivers build config 00:01:56.096 net/bnxt: not in enabled drivers build config 00:01:56.096 net/bonding: not in enabled drivers build config 00:01:56.096 net/cnxk: not in enabled drivers build config 00:01:56.096 net/cpfl: not in enabled drivers build config 00:01:56.096 net/cxgbe: not in enabled drivers build config 00:01:56.096 net/dpaa: not in enabled drivers build config 00:01:56.096 net/dpaa2: not in enabled drivers build config 00:01:56.096 net/e1000: not in enabled drivers build config 00:01:56.096 net/ena: not in enabled drivers build config 00:01:56.096 net/enetc: not in enabled drivers build config 00:01:56.096 net/enetfec: not in enabled drivers build config 00:01:56.096 net/enic: not in enabled drivers build config 00:01:56.096 net/failsafe: not in enabled drivers build config 00:01:56.096 net/fm10k: not in enabled drivers build config 00:01:56.096 net/gve: not in enabled drivers build config 00:01:56.096 net/hinic: not in enabled drivers build config 00:01:56.096 net/hns3: not in enabled drivers build config 00:01:56.096 net/i40e: not in enabled drivers build config 00:01:56.096 net/iavf: not in enabled drivers build config 00:01:56.096 net/ice: not in enabled drivers build config 00:01:56.096 net/idpf: not in enabled drivers build config 00:01:56.096 net/igc: not in enabled drivers build config 00:01:56.096 net/ionic: not in enabled drivers build config 00:01:56.096 net/ipn3ke: not in enabled drivers build config 00:01:56.096 net/ixgbe: not in enabled drivers build config 00:01:56.096 net/mana: not in enabled drivers build config 
00:01:56.096 net/memif: not in enabled drivers build config 00:01:56.096 net/mlx4: not in enabled drivers build config 00:01:56.096 net/mlx5: not in enabled drivers build config 00:01:56.096 net/mvneta: not in enabled drivers build config 00:01:56.096 net/mvpp2: not in enabled drivers build config 00:01:56.096 net/netvsc: not in enabled drivers build config 00:01:56.096 net/nfb: not in enabled drivers build config 00:01:56.096 net/nfp: not in enabled drivers build config 00:01:56.096 net/ngbe: not in enabled drivers build config 00:01:56.096 net/null: not in enabled drivers build config 00:01:56.096 net/octeontx: not in enabled drivers build config 00:01:56.096 net/octeon_ep: not in enabled drivers build config 00:01:56.096 net/pcap: not in enabled drivers build config 00:01:56.096 net/pfe: not in enabled drivers build config 00:01:56.096 net/qede: not in enabled drivers build config 00:01:56.096 net/ring: not in enabled drivers build config 00:01:56.096 net/sfc: not in enabled drivers build config 00:01:56.096 net/softnic: not in enabled drivers build config 00:01:56.096 net/tap: not in enabled drivers build config 00:01:56.096 net/thunderx: not in enabled drivers build config 00:01:56.096 net/txgbe: not in enabled drivers build config 00:01:56.096 net/vdev_netvsc: not in enabled drivers build config 00:01:56.096 net/vhost: not in enabled drivers build config 00:01:56.096 net/virtio: not in enabled drivers build config 00:01:56.096 net/vmxnet3: not in enabled drivers build config 00:01:56.096 raw/*: missing internal dependency, "rawdev" 00:01:56.096 crypto/armv8: not in enabled drivers build config 00:01:56.096 crypto/bcmfs: not in enabled drivers build config 00:01:56.097 crypto/caam_jr: not in enabled drivers build config 00:01:56.097 crypto/ccp: not in enabled drivers build config 00:01:56.097 crypto/cnxk: not in enabled drivers build config 00:01:56.097 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.097 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.097 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.097 crypto/mlx5: not in enabled drivers build config 00:01:56.097 crypto/mvsam: not in enabled drivers build config 00:01:56.097 crypto/nitrox: not in enabled drivers build config 00:01:56.097 crypto/null: not in enabled drivers build config 00:01:56.097 crypto/octeontx: not in enabled drivers build config 00:01:56.097 crypto/openssl: not in enabled drivers build config 00:01:56.097 crypto/scheduler: not in enabled drivers build config 00:01:56.097 crypto/uadk: not in enabled drivers build config 00:01:56.097 crypto/virtio: not in enabled drivers build config 00:01:56.097 compress/isal: not in enabled drivers build config 00:01:56.097 compress/mlx5: not in enabled drivers build config 00:01:56.097 compress/nitrox: not in enabled drivers build config 00:01:56.097 compress/octeontx: not in enabled drivers build config 00:01:56.097 compress/zlib: not in enabled drivers build config 00:01:56.097 regex/*: missing internal dependency, "regexdev" 00:01:56.097 ml/*: missing internal dependency, "mldev" 00:01:56.097 vdpa/ifc: not in enabled drivers build config 00:01:56.097 vdpa/mlx5: not in enabled drivers build config 00:01:56.097 vdpa/nfp: not in enabled drivers build config 00:01:56.097 vdpa/sfc: not in enabled drivers build config 00:01:56.097 event/*: missing internal dependency, "eventdev" 00:01:56.097 baseband/*: missing internal dependency, "bbdev" 00:01:56.097 gpu/*: missing internal dependency, "gpudev" 00:01:56.097 00:01:56.097 00:01:56.663 
Build targets in project: 85 00:01:56.663 00:01:56.663 DPDK 24.03.0 00:01:56.663 00:01:56.663 User defined options 00:01:56.663 buildtype : debug 00:01:56.663 default_library : shared 00:01:56.663 libdir : lib 00:01:56.663 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:56.663 b_sanitize : address 00:01:56.663 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:56.663 c_link_args : 00:01:56.663 cpu_instruction_set: native 00:01:56.663 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:56.663 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:56.663 enable_docs : false 00:01:56.663 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:56.663 enable_kmods : false 00:01:56.663 max_lcores : 128 00:01:56.663 tests : false 00:01:56.663 00:01:56.663 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.932 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:56.932 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.932 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.932 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.932 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.932 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.190 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.190 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.190 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.190 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.190 [10/268] Linking static target lib/librte_kvargs.a 00:01:57.190 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.190 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.190 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.190 [14/268] Linking static target lib/librte_log.a 00:01:57.190 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.190 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.765 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.766 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.766 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.766 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.029 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.029 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.029 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.029 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.029 
[25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.029 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.029 [27/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.029 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.029 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.029 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.029 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.029 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.029 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.029 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.029 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.029 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.029 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.029 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.029 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.029 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.029 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.029 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.029 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.029 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.029 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.029 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.029 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.029 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.029 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.029 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.029 [51/268] Linking static target lib/librte_telemetry.a 00:01:58.029 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.029 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.029 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.029 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.029 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.029 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.290 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.290 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.290 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.290 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.290 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.290 [63/268] Linking target lib/librte_log.so.24.1 00:01:58.290 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:01:58.557 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.557 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.819 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.819 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.819 [69/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:58.819 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.819 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.819 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.819 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.819 [74/268] Linking target lib/librte_kvargs.so.24.1 00:01:58.819 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.819 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.819 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.819 [78/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:59.080 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:59.080 [80/268] Linking static target lib/librte_pci.a 00:01:59.080 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:59.080 [82/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:59.080 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:59.080 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:59.080 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:59.080 [86/268] Linking static target lib/librte_meter.a 00:01:59.080 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:59.080 [88/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:59.080 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:59.080 [90/268] Linking static target lib/librte_ring.a 00:01:59.080 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:59.080 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:59.080 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:59.080 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:59.080 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:59.080 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:59.080 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:59.081 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:59.081 [99/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.081 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:59.081 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:59.081 [102/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:59.081 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:59.081 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:59.081 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:59.081 
[106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:59.081 [107/268] Linking target lib/librte_telemetry.so.24.1 00:01:59.343 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:59.343 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:59.343 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.343 [111/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.343 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:59.343 [113/268] Linking static target lib/librte_mempool.a 00:01:59.343 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:59.343 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:59.343 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.343 [117/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:59.343 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:59.343 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:59.343 [120/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:59.604 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.604 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:59.604 [123/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.604 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:59.604 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:59.604 [126/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.604 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:59.604 [128/268] Linking static target lib/librte_rcu.a 00:01:59.604 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:59.604 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:59.866 [131/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.866 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:59.866 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:59.866 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:59.866 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:59.866 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:59.866 [137/268] Linking static target lib/librte_cmdline.a 00:01:59.866 [138/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:59.866 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.866 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:00.126 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.126 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.126 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:00.126 [144/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.126 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.126 [146/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:00.126 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.126 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.126 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.126 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.126 [151/268] Linking static target lib/librte_eal.a 00:02:00.126 [152/268] Linking static target lib/librte_timer.a 00:02:00.126 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.126 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.386 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.386 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.386 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.386 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:00.386 [159/268] Linking static target lib/librte_dmadev.a 00:02:00.386 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.646 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.646 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:00.646 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.646 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:00.905 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.905 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:00.905 [167/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.905 [168/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.905 [169/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:00.905 [170/268] Linking static target lib/librte_net.a 00:02:00.905 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.905 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:00.905 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.905 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.905 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.905 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.905 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:00.905 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.905 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:00.905 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.905 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:01.164 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.164 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.164 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:01.164 [185/268] Generating drivers/rte_bus_vdev.pmd.c with a custom 
command 00:02:01.164 [186/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.164 [187/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.164 [188/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.164 [189/268] Linking static target drivers/librte_bus_vdev.a 00:02:01.164 [190/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.164 [191/268] Linking static target lib/librte_power.a 00:02:01.164 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:01.164 [193/268] Linking static target lib/librte_hash.a 00:02:01.164 [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.423 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.423 [196/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.423 [197/268] Linking static target drivers/librte_bus_pci.a 00:02:01.423 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.423 [199/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.423 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.423 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.423 [202/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.423 [203/268] Linking static target lib/librte_compressdev.a 00:02:01.423 [204/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.423 [205/268] Linking static target lib/librte_reorder.a 00:02:01.681 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.681 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.681 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.681 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:01.681 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.681 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.681 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.681 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.681 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.939 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.197 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.197 [217/268] Linking static target lib/librte_security.a 00:02:02.454 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.711 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.969 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.969 [221/268] Linking static target lib/librte_mbuf.a 00:02:03.532 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.791 [223/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:03.791 [224/268] Linking static target lib/librte_cryptodev.a 00:02:04.726 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.726 [226/268] Linking static target lib/librte_ethdev.a 00:02:04.726 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.196 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.196 [229/268] Linking target lib/librte_eal.so.24.1 00:02:06.196 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.455 [231/268] Linking target lib/librte_ring.so.24.1 00:02:06.455 [232/268] Linking target lib/librte_meter.so.24.1 00:02:06.455 [233/268] Linking target lib/librte_dmadev.so.24.1 00:02:06.455 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.455 [235/268] Linking target lib/librte_pci.so.24.1 00:02:06.455 [236/268] Linking target lib/librte_timer.so.24.1 00:02:06.455 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.455 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.455 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.455 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.455 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.455 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:06.455 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.455 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:06.713 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.713 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.713 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:06.713 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:06.971 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.971 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:06.971 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:06.971 [252/268] Linking target lib/librte_net.so.24.1 00:02:06.971 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:06.971 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.971 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:06.971 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:06.971 [257/268] Linking target lib/librte_hash.so.24.1 00:02:06.971 [258/268] Linking target lib/librte_security.so.24.1 00:02:07.229 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.488 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.388 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.388 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:09.388 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:09.388 [264/268] Linking target lib/librte_power.so.24.1 00:02:35.935 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.935 [266/268] Linking static target lib/librte_vhost.a 00:02:35.935 [267/268] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.935 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.935 INFO: autodetecting backend as ninja 00:02:35.935 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:36.194 CC lib/ut_mock/mock.o 00:02:36.194 CC lib/ut/ut.o 00:02:36.194 CC lib/log/log.o 00:02:36.194 CC lib/log/log_flags.o 00:02:36.194 CC lib/log/log_deprecated.o 00:02:36.452 LIB libspdk_log.a 00:02:36.452 LIB libspdk_ut.a 00:02:36.452 LIB libspdk_ut_mock.a 00:02:36.452 SO libspdk_ut.so.2.0 00:02:36.452 SO libspdk_ut_mock.so.6.0 00:02:36.452 SO libspdk_log.so.7.0 00:02:36.452 SYMLINK libspdk_ut.so 00:02:36.452 SYMLINK libspdk_ut_mock.so 00:02:36.452 SYMLINK libspdk_log.so 00:02:36.709 CXX lib/trace_parser/trace.o 00:02:36.709 CC lib/dma/dma.o 00:02:36.710 CC lib/ioat/ioat.o 00:02:36.710 CC lib/util/base64.o 00:02:36.710 CC lib/util/bit_array.o 00:02:36.710 CC lib/util/cpuset.o 00:02:36.710 CC lib/util/crc16.o 00:02:36.710 CC lib/util/crc32.o 00:02:36.710 CC lib/util/crc32c.o 00:02:36.710 CC lib/util/crc32_ieee.o 00:02:36.710 CC lib/util/crc64.o 00:02:36.710 CC lib/util/dif.o 00:02:36.710 CC lib/util/fd.o 00:02:36.710 CC lib/util/file.o 00:02:36.710 CC lib/util/hexlify.o 00:02:36.710 CC lib/util/iov.o 00:02:36.710 CC lib/util/math.o 00:02:36.710 CC lib/util/pipe.o 00:02:36.710 CC lib/util/strerror_tls.o 00:02:36.710 CC lib/util/string.o 00:02:36.710 CC lib/util/uuid.o 00:02:36.710 CC lib/util/fd_group.o 00:02:36.710 CC lib/util/xor.o 00:02:36.710 CC lib/util/zipf.o 00:02:36.710 CC lib/vfio_user/host/vfio_user_pci.o 00:02:36.710 CC lib/vfio_user/host/vfio_user.o 00:02:36.710 LIB libspdk_dma.a 00:02:36.968 SO libspdk_dma.so.4.0 00:02:36.968 SYMLINK libspdk_dma.so 00:02:36.968 LIB libspdk_vfio_user.a 00:02:36.968 SO libspdk_vfio_user.so.5.0 00:02:36.968 LIB libspdk_ioat.a 00:02:36.968 SO libspdk_ioat.so.7.0 00:02:36.968 SYMLINK libspdk_vfio_user.so 00:02:37.225 SYMLINK libspdk_ioat.so 00:02:37.225 LIB libspdk_util.a 00:02:37.483 SO libspdk_util.so.9.1 00:02:37.483 SYMLINK libspdk_util.so 00:02:37.741 LIB libspdk_trace_parser.a 00:02:37.741 CC lib/rdma_utils/rdma_utils.o 00:02:37.741 CC lib/json/json_parse.o 00:02:37.741 CC lib/env_dpdk/env.o 00:02:37.741 CC lib/rdma_provider/common.o 00:02:37.741 CC lib/conf/conf.o 00:02:37.741 CC lib/idxd/idxd.o 00:02:37.741 CC lib/vmd/vmd.o 00:02:37.741 CC lib/env_dpdk/memory.o 00:02:37.741 CC lib/json/json_util.o 00:02:37.741 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:37.741 CC lib/idxd/idxd_user.o 00:02:37.741 CC lib/json/json_write.o 00:02:37.741 CC lib/vmd/led.o 00:02:37.741 CC lib/idxd/idxd_kernel.o 00:02:37.741 CC lib/env_dpdk/pci.o 00:02:37.741 CC lib/env_dpdk/init.o 00:02:37.741 CC lib/env_dpdk/threads.o 00:02:37.741 CC lib/env_dpdk/pci_ioat.o 00:02:37.741 CC lib/env_dpdk/pci_virtio.o 00:02:37.741 CC lib/env_dpdk/pci_vmd.o 00:02:37.741 CC lib/env_dpdk/pci_idxd.o 00:02:37.741 CC lib/env_dpdk/pci_event.o 00:02:37.741 CC lib/env_dpdk/sigbus_handler.o 00:02:37.741 CC lib/env_dpdk/pci_dpdk.o 00:02:37.741 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:37.741 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.741 SO libspdk_trace_parser.so.5.0 00:02:37.741 SYMLINK libspdk_trace_parser.so 00:02:37.999 LIB libspdk_rdma_provider.a 00:02:37.999 SO libspdk_rdma_provider.so.6.0 00:02:37.999 SYMLINK libspdk_rdma_provider.so 00:02:37.999 LIB libspdk_rdma_utils.a 00:02:37.999 SO libspdk_rdma_utils.so.1.0 00:02:37.999 LIB libspdk_conf.a 
00:02:37.999 SO libspdk_conf.so.6.0 00:02:37.999 LIB libspdk_json.a 00:02:37.999 SYMLINK libspdk_rdma_utils.so 00:02:38.258 SO libspdk_json.so.6.0 00:02:38.258 SYMLINK libspdk_conf.so 00:02:38.258 SYMLINK libspdk_json.so 00:02:38.258 CC lib/jsonrpc/jsonrpc_server.o 00:02:38.258 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:38.258 CC lib/jsonrpc/jsonrpc_client.o 00:02:38.258 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:38.516 LIB libspdk_idxd.a 00:02:38.516 SO libspdk_idxd.so.12.0 00:02:38.516 LIB libspdk_vmd.a 00:02:38.516 SYMLINK libspdk_idxd.so 00:02:38.516 SO libspdk_vmd.so.6.0 00:02:38.775 SYMLINK libspdk_vmd.so 00:02:38.775 LIB libspdk_jsonrpc.a 00:02:38.775 SO libspdk_jsonrpc.so.6.0 00:02:38.775 SYMLINK libspdk_jsonrpc.so 00:02:39.033 CC lib/rpc/rpc.o 00:02:39.291 LIB libspdk_rpc.a 00:02:39.291 SO libspdk_rpc.so.6.0 00:02:39.291 SYMLINK libspdk_rpc.so 00:02:39.549 CC lib/keyring/keyring.o 00:02:39.549 CC lib/trace/trace.o 00:02:39.549 CC lib/keyring/keyring_rpc.o 00:02:39.549 CC lib/notify/notify.o 00:02:39.549 CC lib/trace/trace_flags.o 00:02:39.549 CC lib/trace/trace_rpc.o 00:02:39.549 CC lib/notify/notify_rpc.o 00:02:39.549 LIB libspdk_notify.a 00:02:39.549 SO libspdk_notify.so.6.0 00:02:39.807 SYMLINK libspdk_notify.so 00:02:39.807 LIB libspdk_keyring.a 00:02:39.807 SO libspdk_keyring.so.1.0 00:02:39.807 LIB libspdk_trace.a 00:02:39.807 SO libspdk_trace.so.10.0 00:02:39.807 SYMLINK libspdk_keyring.so 00:02:39.807 SYMLINK libspdk_trace.so 00:02:40.065 CC lib/sock/sock.o 00:02:40.065 CC lib/sock/sock_rpc.o 00:02:40.065 CC lib/thread/thread.o 00:02:40.065 CC lib/thread/iobuf.o 00:02:40.666 LIB libspdk_sock.a 00:02:40.666 SO libspdk_sock.so.10.0 00:02:40.666 SYMLINK libspdk_sock.so 00:02:40.666 LIB libspdk_env_dpdk.a 00:02:40.666 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:40.666 CC lib/nvme/nvme_ctrlr.o 00:02:40.666 CC lib/nvme/nvme_fabric.o 00:02:40.666 CC lib/nvme/nvme_ns_cmd.o 00:02:40.666 CC lib/nvme/nvme_ns.o 00:02:40.666 CC lib/nvme/nvme_pcie_common.o 00:02:40.666 CC lib/nvme/nvme_pcie.o 00:02:40.666 CC lib/nvme/nvme_qpair.o 00:02:40.666 CC lib/nvme/nvme.o 00:02:40.666 CC lib/nvme/nvme_quirks.o 00:02:40.666 CC lib/nvme/nvme_transport.o 00:02:40.666 CC lib/nvme/nvme_discovery.o 00:02:40.666 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.666 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.666 CC lib/nvme/nvme_tcp.o 00:02:40.666 CC lib/nvme/nvme_opal.o 00:02:40.666 CC lib/nvme/nvme_io_msg.o 00:02:40.666 CC lib/nvme/nvme_poll_group.o 00:02:40.666 CC lib/nvme/nvme_zns.o 00:02:40.666 CC lib/nvme/nvme_stubs.o 00:02:40.666 CC lib/nvme/nvme_auth.o 00:02:40.666 CC lib/nvme/nvme_cuse.o 00:02:40.666 CC lib/nvme/nvme_rdma.o 00:02:40.666 SO libspdk_env_dpdk.so.14.1 00:02:40.925 SYMLINK libspdk_env_dpdk.so 00:02:42.300 LIB libspdk_thread.a 00:02:42.300 SO libspdk_thread.so.10.1 00:02:42.300 SYMLINK libspdk_thread.so 00:02:42.300 CC lib/accel/accel.o 00:02:42.300 CC lib/virtio/virtio.o 00:02:42.300 CC lib/init/json_config.o 00:02:42.300 CC lib/blob/blobstore.o 00:02:42.300 CC lib/virtio/virtio_vhost_user.o 00:02:42.300 CC lib/init/subsystem.o 00:02:42.300 CC lib/accel/accel_rpc.o 00:02:42.300 CC lib/blob/request.o 00:02:42.300 CC lib/init/subsystem_rpc.o 00:02:42.300 CC lib/virtio/virtio_vfio_user.o 00:02:42.300 CC lib/blob/zeroes.o 00:02:42.300 CC lib/accel/accel_sw.o 00:02:42.300 CC lib/init/rpc.o 00:02:42.300 CC lib/virtio/virtio_pci.o 00:02:42.300 CC lib/blob/blob_bs_dev.o 00:02:42.557 LIB libspdk_init.a 00:02:42.557 SO libspdk_init.so.5.0 00:02:42.557 SYMLINK libspdk_init.so 00:02:42.814 LIB libspdk_virtio.a 
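[editor's note] The CC / LIB / SO / SYMLINK tags running through this stretch of the log are SPDK's quiet-make output: one short marker per build step instead of the full command line. A rough bash sketch of what each tag typically stands for under a GNU toolchain; the paths, version numbers, and compiler flags below are illustrative assumptions, not the build's literal commands:

```bash
# Approximate expansion of SPDK's quiet-make tags (flags/paths illustrative).

# "CC lib/log/log.o"       -- compile a single object file
cc -O2 -fPIC -Iinclude -c lib/log/log.c -o lib/log/log.o

# "LIB libspdk_log.a"      -- archive the objects into a static library
ar crs build/lib/libspdk_log.a lib/log/*.o

# "SO libspdk_log.so.7.0"  -- link the versioned shared object
cc -shared -Wl,-soname,libspdk_log.so.7 lib/log/*.o -o build/lib/libspdk_log.so.7.0

# "SYMLINK libspdk_log.so" -- point the unversioned name at the current version
ln -sf libspdk_log.so.7.0 build/lib/libspdk_log.so
```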
00:02:42.814 SO libspdk_virtio.so.7.0 00:02:42.814 SYMLINK libspdk_virtio.so 00:02:42.814 CC lib/event/app.o 00:02:42.814 CC lib/event/reactor.o 00:02:42.814 CC lib/event/log_rpc.o 00:02:42.814 CC lib/event/app_rpc.o 00:02:42.814 CC lib/event/scheduler_static.o 00:02:43.380 LIB libspdk_event.a 00:02:43.380 SO libspdk_event.so.14.0 00:02:43.638 SYMLINK libspdk_event.so 00:02:43.638 LIB libspdk_accel.a 00:02:43.638 SO libspdk_accel.so.15.1 00:02:43.638 LIB libspdk_nvme.a 00:02:43.638 SYMLINK libspdk_accel.so 00:02:43.895 SO libspdk_nvme.so.13.1 00:02:43.895 CC lib/bdev/bdev.o 00:02:43.895 CC lib/bdev/bdev_rpc.o 00:02:43.895 CC lib/bdev/bdev_zone.o 00:02:43.895 CC lib/bdev/part.o 00:02:43.895 CC lib/bdev/scsi_nvme.o 00:02:44.153 SYMLINK libspdk_nvme.so 00:02:46.679 LIB libspdk_blob.a 00:02:46.679 SO libspdk_blob.so.11.0 00:02:46.679 SYMLINK libspdk_blob.so 00:02:46.679 CC lib/lvol/lvol.o 00:02:46.679 CC lib/blobfs/blobfs.o 00:02:46.679 CC lib/blobfs/tree.o 00:02:47.244 LIB libspdk_bdev.a 00:02:47.244 SO libspdk_bdev.so.15.1 00:02:47.244 SYMLINK libspdk_bdev.so 00:02:47.508 CC lib/nbd/nbd.o 00:02:47.508 CC lib/scsi/dev.o 00:02:47.508 CC lib/nbd/nbd_rpc.o 00:02:47.508 CC lib/scsi/lun.o 00:02:47.508 CC lib/ftl/ftl_core.o 00:02:47.508 CC lib/scsi/port.o 00:02:47.508 CC lib/ftl/ftl_init.o 00:02:47.508 CC lib/scsi/scsi.o 00:02:47.508 CC lib/ftl/ftl_layout.o 00:02:47.508 CC lib/scsi/scsi_bdev.o 00:02:47.508 CC lib/ublk/ublk.o 00:02:47.508 CC lib/nvmf/ctrlr.o 00:02:47.508 CC lib/ftl/ftl_debug.o 00:02:47.508 CC lib/scsi/scsi_pr.o 00:02:47.508 CC lib/nvmf/ctrlr_discovery.o 00:02:47.508 CC lib/scsi/scsi_rpc.o 00:02:47.508 CC lib/ublk/ublk_rpc.o 00:02:47.508 CC lib/ftl/ftl_io.o 00:02:47.508 CC lib/scsi/task.o 00:02:47.508 CC lib/ftl/ftl_sb.o 00:02:47.508 CC lib/nvmf/ctrlr_bdev.o 00:02:47.508 CC lib/nvmf/subsystem.o 00:02:47.508 CC lib/ftl/ftl_l2p.o 00:02:47.508 CC lib/ftl/ftl_l2p_flat.o 00:02:47.508 CC lib/nvmf/nvmf.o 00:02:47.508 CC lib/ftl/ftl_nv_cache.o 00:02:47.508 CC lib/nvmf/nvmf_rpc.o 00:02:47.508 CC lib/ftl/ftl_band.o 00:02:47.508 CC lib/nvmf/transport.o 00:02:47.508 CC lib/nvmf/tcp.o 00:02:47.508 CC lib/ftl/ftl_writer.o 00:02:47.508 CC lib/ftl/ftl_band_ops.o 00:02:47.508 CC lib/nvmf/stubs.o 00:02:47.508 CC lib/nvmf/mdns_server.o 00:02:47.508 CC lib/ftl/ftl_rq.o 00:02:47.508 CC lib/nvmf/auth.o 00:02:47.508 CC lib/nvmf/rdma.o 00:02:47.508 CC lib/ftl/ftl_reloc.o 00:02:47.508 CC lib/ftl/ftl_l2p_cache.o 00:02:47.508 CC lib/ftl/ftl_p2l.o 00:02:47.508 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.508 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.508 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.508 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:47.508 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:47.508 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:47.766 LIB libspdk_blobfs.a 00:02:47.766 SO libspdk_blobfs.so.10.0 00:02:47.766 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:47.766 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:47.766 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:47.766 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.030 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.030 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.030 SYMLINK libspdk_blobfs.so 00:02:48.030 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.030 CC lib/ftl/utils/ftl_conf.o 00:02:48.030 CC lib/ftl/utils/ftl_md.o 00:02:48.030 CC lib/ftl/utils/ftl_mempool.o 00:02:48.030 CC lib/ftl/utils/ftl_bitmap.o 00:02:48.030 CC lib/ftl/utils/ftl_property.o 00:02:48.030 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:48.030 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:48.030 CC lib/ftl/upgrade/ftl_sb_upgrade.o 
00:02:48.030 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:48.030 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:48.030 LIB libspdk_lvol.a 00:02:48.030 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:48.030 SO libspdk_lvol.so.10.0 00:02:48.030 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:48.030 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:48.289 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:48.289 SYMLINK libspdk_lvol.so 00:02:48.289 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:48.289 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:48.289 CC lib/ftl/base/ftl_base_dev.o 00:02:48.289 CC lib/ftl/base/ftl_base_bdev.o 00:02:48.289 CC lib/ftl/ftl_trace.o 00:02:48.547 LIB libspdk_nbd.a 00:02:48.547 SO libspdk_nbd.so.7.0 00:02:48.547 SYMLINK libspdk_nbd.so 00:02:48.805 LIB libspdk_scsi.a 00:02:48.805 SO libspdk_scsi.so.9.0 00:02:48.805 LIB libspdk_ublk.a 00:02:48.805 SO libspdk_ublk.so.3.0 00:02:48.805 SYMLINK libspdk_scsi.so 00:02:48.805 SYMLINK libspdk_ublk.so 00:02:49.062 CC lib/iscsi/conn.o 00:02:49.062 CC lib/vhost/vhost.o 00:02:49.062 CC lib/iscsi/init_grp.o 00:02:49.062 CC lib/vhost/vhost_rpc.o 00:02:49.062 CC lib/iscsi/iscsi.o 00:02:49.062 CC lib/vhost/vhost_scsi.o 00:02:49.062 CC lib/iscsi/md5.o 00:02:49.062 CC lib/vhost/vhost_blk.o 00:02:49.062 CC lib/iscsi/param.o 00:02:49.062 CC lib/vhost/rte_vhost_user.o 00:02:49.062 CC lib/iscsi/portal_grp.o 00:02:49.063 CC lib/iscsi/tgt_node.o 00:02:49.063 CC lib/iscsi/iscsi_subsystem.o 00:02:49.063 CC lib/iscsi/iscsi_rpc.o 00:02:49.063 CC lib/iscsi/task.o 00:02:49.320 LIB libspdk_ftl.a 00:02:49.579 SO libspdk_ftl.so.9.0 00:02:50.145 SYMLINK libspdk_ftl.so 00:02:50.403 LIB libspdk_vhost.a 00:02:50.403 SO libspdk_vhost.so.8.0 00:02:50.661 SYMLINK libspdk_vhost.so 00:02:50.920 LIB libspdk_nvmf.a 00:02:50.920 LIB libspdk_iscsi.a 00:02:50.920 SO libspdk_nvmf.so.18.1 00:02:50.920 SO libspdk_iscsi.so.8.0 00:02:51.178 SYMLINK libspdk_iscsi.so 00:02:51.178 SYMLINK libspdk_nvmf.so 00:02:51.436 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.436 CC module/keyring/linux/keyring.o 00:02:51.436 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.436 CC module/keyring/linux/keyring_rpc.o 00:02:51.436 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.436 CC module/keyring/file/keyring.o 00:02:51.436 CC module/accel/error/accel_error.o 00:02:51.436 CC module/accel/ioat/accel_ioat.o 00:02:51.436 CC module/blob/bdev/blob_bdev.o 00:02:51.436 CC module/accel/dsa/accel_dsa.o 00:02:51.436 CC module/sock/posix/posix.o 00:02:51.436 CC module/keyring/file/keyring_rpc.o 00:02:51.436 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.436 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.436 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.436 CC module/accel/error/accel_error_rpc.o 00:02:51.436 CC module/accel/iaa/accel_iaa.o 00:02:51.436 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.694 LIB libspdk_env_dpdk_rpc.a 00:02:51.694 SO libspdk_env_dpdk_rpc.so.6.0 00:02:51.694 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.694 LIB libspdk_keyring_linux.a 00:02:51.694 LIB libspdk_keyring_file.a 00:02:51.694 LIB libspdk_scheduler_gscheduler.a 00:02:51.694 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.694 SO libspdk_keyring_linux.so.1.0 00:02:51.694 SO libspdk_keyring_file.so.1.0 00:02:51.694 SO libspdk_scheduler_gscheduler.so.4.0 00:02:51.694 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.694 LIB libspdk_accel_error.a 00:02:51.694 LIB libspdk_accel_ioat.a 00:02:51.694 LIB libspdk_scheduler_dynamic.a 00:02:51.694 SO libspdk_accel_error.so.2.0 00:02:51.694 LIB libspdk_accel_iaa.a 00:02:51.694 SYMLINK libspdk_keyring_linux.so 
00:02:51.694 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.694 SO libspdk_accel_ioat.so.6.0 00:02:51.694 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.694 SYMLINK libspdk_keyring_file.so 00:02:51.952 SO libspdk_scheduler_dynamic.so.4.0 00:02:51.952 SO libspdk_accel_iaa.so.3.0 00:02:51.952 SYMLINK libspdk_accel_error.so 00:02:51.952 SYMLINK libspdk_accel_ioat.so 00:02:51.952 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.952 LIB libspdk_accel_dsa.a 00:02:51.952 LIB libspdk_blob_bdev.a 00:02:51.952 SYMLINK libspdk_accel_iaa.so 00:02:51.952 SO libspdk_accel_dsa.so.5.0 00:02:51.952 SO libspdk_blob_bdev.so.11.0 00:02:51.952 SYMLINK libspdk_accel_dsa.so 00:02:51.952 SYMLINK libspdk_blob_bdev.so 00:02:52.211 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.211 CC module/bdev/delay/vbdev_delay.o 00:02:52.211 CC module/bdev/null/bdev_null.o 00:02:52.211 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.211 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.211 CC module/bdev/null/bdev_null_rpc.o 00:02:52.211 CC module/bdev/gpt/gpt.o 00:02:52.211 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.211 CC module/bdev/aio/bdev_aio.o 00:02:52.211 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.211 CC module/bdev/lvol/vbdev_lvol.o 00:02:52.211 CC module/bdev/nvme/bdev_nvme.o 00:02:52.211 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.211 CC module/bdev/split/vbdev_split.o 00:02:52.211 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.211 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.211 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.211 CC module/bdev/raid/bdev_raid.o 00:02:52.211 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.211 CC module/bdev/malloc/bdev_malloc.o 00:02:52.211 CC module/bdev/nvme/nvme_rpc.o 00:02:52.211 CC module/bdev/aio/bdev_aio_rpc.o 00:02:52.211 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.211 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.211 CC module/bdev/iscsi/bdev_iscsi.o 00:02:52.211 CC module/bdev/error/vbdev_error.o 00:02:52.211 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.211 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.211 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:52.211 CC module/bdev/raid/bdev_raid_sb.o 00:02:52.211 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:52.211 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.211 CC module/bdev/nvme/vbdev_opal.o 00:02:52.211 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:52.211 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.211 CC module/bdev/raid/raid0.o 00:02:52.211 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:52.211 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:52.211 CC module/bdev/ftl/bdev_ftl.o 00:02:52.211 CC module/bdev/raid/raid1.o 00:02:52.211 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:52.211 CC module/bdev/raid/concat.o 00:02:52.777 LIB libspdk_blobfs_bdev.a 00:02:52.777 SO libspdk_blobfs_bdev.so.6.0 00:02:52.777 LIB libspdk_bdev_split.a 00:02:52.777 SO libspdk_bdev_split.so.6.0 00:02:52.777 SYMLINK libspdk_blobfs_bdev.so 00:02:52.777 LIB libspdk_bdev_null.a 00:02:52.777 LIB libspdk_bdev_aio.a 00:02:52.777 LIB libspdk_sock_posix.a 00:02:52.777 SO libspdk_bdev_aio.so.6.0 00:02:52.777 SO libspdk_bdev_null.so.6.0 00:02:52.777 SO libspdk_sock_posix.so.6.0 00:02:52.777 LIB libspdk_bdev_gpt.a 00:02:52.777 SYMLINK libspdk_bdev_split.so 00:02:52.777 LIB libspdk_bdev_ftl.a 00:02:52.777 SO libspdk_bdev_gpt.so.6.0 00:02:52.777 LIB libspdk_bdev_error.a 00:02:52.777 SO libspdk_bdev_ftl.so.6.0 00:02:52.777 SYMLINK libspdk_bdev_null.so 00:02:52.777 SYMLINK libspdk_bdev_aio.so 00:02:52.777 LIB libspdk_bdev_passthru.a 
00:02:52.777 SO libspdk_bdev_error.so.6.0 00:02:52.777 SYMLINK libspdk_sock_posix.so 00:02:52.777 SO libspdk_bdev_passthru.so.6.0 00:02:52.777 SYMLINK libspdk_bdev_gpt.so 00:02:52.777 SYMLINK libspdk_bdev_ftl.so 00:02:52.777 SYMLINK libspdk_bdev_error.so 00:02:52.777 LIB libspdk_bdev_zone_block.a 00:02:52.777 SYMLINK libspdk_bdev_passthru.so 00:02:53.036 LIB libspdk_bdev_iscsi.a 00:02:53.036 SO libspdk_bdev_zone_block.so.6.0 00:02:53.036 LIB libspdk_bdev_malloc.a 00:02:53.036 SO libspdk_bdev_iscsi.so.6.0 00:02:53.036 LIB libspdk_bdev_delay.a 00:02:53.036 SO libspdk_bdev_malloc.so.6.0 00:02:53.036 SO libspdk_bdev_delay.so.6.0 00:02:53.036 SYMLINK libspdk_bdev_zone_block.so 00:02:53.036 SYMLINK libspdk_bdev_iscsi.so 00:02:53.036 SYMLINK libspdk_bdev_malloc.so 00:02:53.036 SYMLINK libspdk_bdev_delay.so 00:02:53.036 LIB libspdk_bdev_lvol.a 00:02:53.036 SO libspdk_bdev_lvol.so.6.0 00:02:53.036 LIB libspdk_bdev_virtio.a 00:02:53.294 SO libspdk_bdev_virtio.so.6.0 00:02:53.294 SYMLINK libspdk_bdev_lvol.so 00:02:53.294 SYMLINK libspdk_bdev_virtio.so 00:02:53.553 LIB libspdk_bdev_raid.a 00:02:53.553 SO libspdk_bdev_raid.so.6.0 00:02:53.812 SYMLINK libspdk_bdev_raid.so 00:02:55.216 LIB libspdk_bdev_nvme.a 00:02:55.216 SO libspdk_bdev_nvme.so.7.0 00:02:55.474 SYMLINK libspdk_bdev_nvme.so 00:02:55.733 CC module/event/subsystems/vmd/vmd.o 00:02:55.733 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.733 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.733 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.733 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.733 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.733 CC module/event/subsystems/sock/sock.o 00:02:55.733 CC module/event/subsystems/keyring/keyring.o 00:02:55.733 LIB libspdk_event_keyring.a 00:02:55.733 LIB libspdk_event_vhost_blk.a 00:02:55.733 LIB libspdk_event_scheduler.a 00:02:55.733 LIB libspdk_event_vmd.a 00:02:55.733 LIB libspdk_event_sock.a 00:02:55.733 SO libspdk_event_keyring.so.1.0 00:02:55.993 LIB libspdk_event_iobuf.a 00:02:55.993 SO libspdk_event_vhost_blk.so.3.0 00:02:55.993 SO libspdk_event_scheduler.so.4.0 00:02:55.993 SO libspdk_event_vmd.so.6.0 00:02:55.993 SO libspdk_event_sock.so.5.0 00:02:55.993 SO libspdk_event_iobuf.so.3.0 00:02:55.993 SYMLINK libspdk_event_keyring.so 00:02:55.993 SYMLINK libspdk_event_vhost_blk.so 00:02:55.993 SYMLINK libspdk_event_scheduler.so 00:02:55.993 SYMLINK libspdk_event_sock.so 00:02:55.993 SYMLINK libspdk_event_vmd.so 00:02:55.993 SYMLINK libspdk_event_iobuf.so 00:02:56.251 CC module/event/subsystems/accel/accel.o 00:02:56.251 LIB libspdk_event_accel.a 00:02:56.251 SO libspdk_event_accel.so.6.0 00:02:56.251 SYMLINK libspdk_event_accel.so 00:02:56.510 CC module/event/subsystems/bdev/bdev.o 00:02:56.768 LIB libspdk_event_bdev.a 00:02:56.768 SO libspdk_event_bdev.so.6.0 00:02:56.768 SYMLINK libspdk_event_bdev.so 00:02:57.025 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.025 CC module/event/subsystems/ublk/ublk.o 00:02:57.025 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.025 CC module/event/subsystems/nbd/nbd.o 00:02:57.025 CC module/event/subsystems/scsi/scsi.o 00:02:57.025 LIB libspdk_event_nbd.a 00:02:57.025 LIB libspdk_event_ublk.a 00:02:57.025 LIB libspdk_event_scsi.a 00:02:57.025 SO libspdk_event_nbd.so.6.0 00:02:57.025 SO libspdk_event_ublk.so.3.0 00:02:57.025 SO libspdk_event_scsi.so.6.0 00:02:57.282 SYMLINK libspdk_event_ublk.so 00:02:57.282 SYMLINK libspdk_event_nbd.so 00:02:57.282 SYMLINK libspdk_event_scsi.so 00:02:57.282 LIB libspdk_event_nvmf.a 
00:02:57.282 SO libspdk_event_nvmf.so.6.0 00:02:57.282 SYMLINK libspdk_event_nvmf.so 00:02:57.282 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.282 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.541 LIB libspdk_event_vhost_scsi.a 00:02:57.541 LIB libspdk_event_iscsi.a 00:02:57.541 SO libspdk_event_iscsi.so.6.0 00:02:57.541 SO libspdk_event_vhost_scsi.so.3.0 00:02:57.541 SYMLINK libspdk_event_vhost_scsi.so 00:02:57.541 SYMLINK libspdk_event_iscsi.so 00:02:57.799 SO libspdk.so.6.0 00:02:57.799 SYMLINK libspdk.so 00:02:57.799 CC app/trace_record/trace_record.o 00:02:57.799 CXX app/trace/trace.o 00:02:57.799 CC app/spdk_nvme_perf/perf.o 00:02:57.799 CC app/spdk_nvme_identify/identify.o 00:02:57.799 CC app/spdk_top/spdk_top.o 00:02:57.799 CC app/spdk_lspci/spdk_lspci.o 00:02:57.799 TEST_HEADER include/spdk/accel.h 00:02:57.799 TEST_HEADER include/spdk/accel_module.h 00:02:57.799 TEST_HEADER include/spdk/assert.h 00:02:57.799 TEST_HEADER include/spdk/base64.h 00:02:57.799 TEST_HEADER include/spdk/barrier.h 00:02:57.799 CC test/rpc_client/rpc_client_test.o 00:02:57.799 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.799 TEST_HEADER include/spdk/bdev.h 00:02:57.799 TEST_HEADER include/spdk/bdev_module.h 00:02:57.799 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.799 TEST_HEADER include/spdk/bit_array.h 00:02:57.799 TEST_HEADER include/spdk/bit_pool.h 00:02:57.799 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.799 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.799 TEST_HEADER include/spdk/blobfs.h 00:02:57.799 TEST_HEADER include/spdk/blob.h 00:02:57.799 TEST_HEADER include/spdk/conf.h 00:02:57.799 TEST_HEADER include/spdk/config.h 00:02:57.799 TEST_HEADER include/spdk/crc16.h 00:02:57.799 TEST_HEADER include/spdk/cpuset.h 00:02:57.799 TEST_HEADER include/spdk/crc32.h 00:02:57.799 TEST_HEADER include/spdk/crc64.h 00:02:57.799 TEST_HEADER include/spdk/dif.h 00:02:57.799 TEST_HEADER include/spdk/dma.h 00:02:57.799 TEST_HEADER include/spdk/endian.h 00:02:57.799 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.799 TEST_HEADER include/spdk/env.h 00:02:57.799 TEST_HEADER include/spdk/event.h 00:02:57.799 TEST_HEADER include/spdk/fd.h 00:02:57.799 TEST_HEADER include/spdk/fd_group.h 00:02:57.799 TEST_HEADER include/spdk/file.h 00:02:57.799 TEST_HEADER include/spdk/ftl.h 00:02:57.799 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.799 TEST_HEADER include/spdk/hexlify.h 00:02:57.799 TEST_HEADER include/spdk/histogram_data.h 00:02:57.799 TEST_HEADER include/spdk/idxd.h 00:02:57.799 TEST_HEADER include/spdk/init.h 00:02:57.799 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.799 TEST_HEADER include/spdk/ioat.h 00:02:57.799 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.799 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.799 TEST_HEADER include/spdk/json.h 00:02:57.799 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.799 TEST_HEADER include/spdk/keyring.h 00:02:57.799 TEST_HEADER include/spdk/keyring_module.h 00:02:57.799 TEST_HEADER include/spdk/log.h 00:02:57.799 TEST_HEADER include/spdk/likely.h 00:02:57.799 TEST_HEADER include/spdk/lvol.h 00:02:57.799 TEST_HEADER include/spdk/memory.h 00:02:57.799 TEST_HEADER include/spdk/mmio.h 00:02:57.799 TEST_HEADER include/spdk/nbd.h 00:02:57.799 TEST_HEADER include/spdk/notify.h 00:02:57.799 TEST_HEADER include/spdk/nvme.h 00:02:57.799 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.799 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.799 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.799 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.799 TEST_HEADER 
include/spdk/nvme_zns.h 00:02:58.065 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.065 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.065 TEST_HEADER include/spdk/nvmf.h 00:02:58.065 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.065 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.065 TEST_HEADER include/spdk/opal.h 00:02:58.065 TEST_HEADER include/spdk/opal_spec.h 00:02:58.065 TEST_HEADER include/spdk/pci_ids.h 00:02:58.065 TEST_HEADER include/spdk/pipe.h 00:02:58.065 TEST_HEADER include/spdk/queue.h 00:02:58.065 TEST_HEADER include/spdk/reduce.h 00:02:58.065 TEST_HEADER include/spdk/rpc.h 00:02:58.065 TEST_HEADER include/spdk/scheduler.h 00:02:58.065 TEST_HEADER include/spdk/scsi.h 00:02:58.065 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.065 TEST_HEADER include/spdk/sock.h 00:02:58.065 TEST_HEADER include/spdk/stdinc.h 00:02:58.065 TEST_HEADER include/spdk/string.h 00:02:58.065 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.065 TEST_HEADER include/spdk/thread.h 00:02:58.065 TEST_HEADER include/spdk/trace.h 00:02:58.065 TEST_HEADER include/spdk/trace_parser.h 00:02:58.065 TEST_HEADER include/spdk/ublk.h 00:02:58.065 TEST_HEADER include/spdk/tree.h 00:02:58.065 TEST_HEADER include/spdk/util.h 00:02:58.065 TEST_HEADER include/spdk/version.h 00:02:58.065 TEST_HEADER include/spdk/uuid.h 00:02:58.065 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.065 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.065 TEST_HEADER include/spdk/vhost.h 00:02:58.065 TEST_HEADER include/spdk/vmd.h 00:02:58.065 TEST_HEADER include/spdk/xor.h 00:02:58.065 TEST_HEADER include/spdk/zipf.h 00:02:58.065 CXX test/cpp_headers/accel.o 00:02:58.065 CXX test/cpp_headers/accel_module.o 00:02:58.065 CXX test/cpp_headers/barrier.o 00:02:58.065 CXX test/cpp_headers/assert.o 00:02:58.065 CXX test/cpp_headers/base64.o 00:02:58.065 CXX test/cpp_headers/bdev.o 00:02:58.065 CXX test/cpp_headers/bdev_module.o 00:02:58.065 CXX test/cpp_headers/bdev_zone.o 00:02:58.065 CXX test/cpp_headers/bit_array.o 00:02:58.065 CXX test/cpp_headers/bit_pool.o 00:02:58.065 CXX test/cpp_headers/blob_bdev.o 00:02:58.065 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.065 CXX test/cpp_headers/blobfs.o 00:02:58.065 CC app/nvmf_tgt/nvmf_main.o 00:02:58.065 CXX test/cpp_headers/blob.o 00:02:58.065 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.065 CXX test/cpp_headers/conf.o 00:02:58.065 CXX test/cpp_headers/config.o 00:02:58.065 CXX test/cpp_headers/cpuset.o 00:02:58.065 CC app/spdk_dd/spdk_dd.o 00:02:58.065 CXX test/cpp_headers/crc16.o 00:02:58.065 CC app/spdk_tgt/spdk_tgt.o 00:02:58.065 CC examples/util/zipf/zipf.o 00:02:58.065 CC examples/ioat/verify/verify.o 00:02:58.065 CXX test/cpp_headers/crc32.o 00:02:58.065 CC test/app/jsoncat/jsoncat.o 00:02:58.065 CC test/thread/poller_perf/poller_perf.o 00:02:58.065 CC examples/ioat/perf/perf.o 00:02:58.065 CC test/app/histogram_perf/histogram_perf.o 00:02:58.065 CC test/env/pci/pci_ut.o 00:02:58.065 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.065 CC test/env/vtophys/vtophys.o 00:02:58.065 CC app/fio/nvme/fio_plugin.o 00:02:58.065 CC test/app/stub/stub.o 00:02:58.065 CC test/env/memory/memory_ut.o 00:02:58.065 CC test/dma/test_dma/test_dma.o 00:02:58.065 CC app/fio/bdev/fio_plugin.o 00:02:58.065 CC test/app/bdev_svc/bdev_svc.o 00:02:58.325 LINK spdk_lspci 00:02:58.325 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.325 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.325 LINK rpc_client_test 00:02:58.325 LINK jsoncat 00:02:58.325 LINK spdk_nvme_discover 00:02:58.325 LINK poller_perf 
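[editor's note] The long run of TEST_HEADER include/spdk/*.h entries followed by the CXX test/cpp_headers/*.o compiles above is SPDK's public-header self-containedness check: every installed header is compiled as its own translation unit, so a header that forgets a transitive #include fails the build immediately. A minimal bash sketch of the same technique, with the output directory and compiler flags as illustrative assumptions:

```bash
# Compile each public header in isolation; a header that is not
# self-contained (missing an #include it depends on) breaks here.
mkdir -p build/hdr_check
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One translation unit whose only content is the header under test.
    printf '#include <spdk/%s.h>\n' "$name" > "build/hdr_check/${name}.cpp"
    c++ -Iinclude -c "build/hdr_check/${name}.cpp" \
        -o "build/hdr_check/${name}.o" || echo "not self-contained: $hdr"
done
```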
00:02:58.325 LINK interrupt_tgt 00:02:58.325 LINK histogram_perf 00:02:58.325 LINK zipf 00:02:58.325 LINK vtophys 00:02:58.325 CXX test/cpp_headers/crc64.o 00:02:58.325 CXX test/cpp_headers/dif.o 00:02:58.325 CXX test/cpp_headers/dma.o 00:02:58.325 CXX test/cpp_headers/endian.o 00:02:58.325 LINK nvmf_tgt 00:02:58.325 CXX test/cpp_headers/env_dpdk.o 00:02:58.325 CXX test/cpp_headers/env.o 00:02:58.325 LINK env_dpdk_post_init 00:02:58.325 CXX test/cpp_headers/event.o 00:02:58.325 CXX test/cpp_headers/fd_group.o 00:02:58.325 CXX test/cpp_headers/fd.o 00:02:58.325 CXX test/cpp_headers/file.o 00:02:58.325 CXX test/cpp_headers/ftl.o 00:02:58.325 LINK iscsi_tgt 00:02:58.587 CXX test/cpp_headers/gpt_spec.o 00:02:58.587 CXX test/cpp_headers/hexlify.o 00:02:58.587 LINK stub 00:02:58.587 CXX test/cpp_headers/histogram_data.o 00:02:58.587 CXX test/cpp_headers/idxd.o 00:02:58.587 LINK spdk_tgt 00:02:58.587 LINK spdk_trace_record 00:02:58.587 CXX test/cpp_headers/idxd_spec.o 00:02:58.587 LINK bdev_svc 00:02:58.587 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.587 LINK verify 00:02:58.587 CXX test/cpp_headers/init.o 00:02:58.587 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:58.587 LINK ioat_perf 00:02:58.587 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:58.587 CXX test/cpp_headers/ioat.o 00:02:58.587 CXX test/cpp_headers/ioat_spec.o 00:02:58.587 CXX test/cpp_headers/iscsi_spec.o 00:02:58.587 CXX test/cpp_headers/json.o 00:02:58.587 CXX test/cpp_headers/jsonrpc.o 00:02:58.851 CXX test/cpp_headers/keyring.o 00:02:58.851 CXX test/cpp_headers/keyring_module.o 00:02:58.851 CXX test/cpp_headers/likely.o 00:02:58.851 CXX test/cpp_headers/log.o 00:02:58.851 CXX test/cpp_headers/lvol.o 00:02:58.851 LINK spdk_trace 00:02:58.851 LINK spdk_dd 00:02:58.851 CXX test/cpp_headers/memory.o 00:02:58.851 CXX test/cpp_headers/mmio.o 00:02:58.851 CXX test/cpp_headers/nbd.o 00:02:58.851 CXX test/cpp_headers/notify.o 00:02:58.851 CXX test/cpp_headers/nvme.o 00:02:58.851 CXX test/cpp_headers/nvme_intel.o 00:02:58.851 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.851 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.851 CXX test/cpp_headers/nvme_spec.o 00:02:58.851 CXX test/cpp_headers/nvme_zns.o 00:02:58.851 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.851 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.851 CXX test/cpp_headers/nvmf.o 00:02:58.851 CXX test/cpp_headers/nvmf_spec.o 00:02:58.851 CXX test/cpp_headers/nvmf_transport.o 00:02:58.851 CXX test/cpp_headers/opal.o 00:02:58.851 CXX test/cpp_headers/opal_spec.o 00:02:58.851 CXX test/cpp_headers/pci_ids.o 00:02:58.851 CXX test/cpp_headers/pipe.o 00:02:59.111 LINK test_dma 00:02:59.111 LINK pci_ut 00:02:59.111 CXX test/cpp_headers/queue.o 00:02:59.111 CXX test/cpp_headers/reduce.o 00:02:59.111 CXX test/cpp_headers/rpc.o 00:02:59.111 CC test/event/event_perf/event_perf.o 00:02:59.111 CC examples/sock/hello_world/hello_sock.o 00:02:59.111 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.111 CXX test/cpp_headers/scheduler.o 00:02:59.111 CC examples/idxd/perf/perf.o 00:02:59.111 CC examples/vmd/led/led.o 00:02:59.111 CXX test/cpp_headers/scsi.o 00:02:59.111 CC examples/thread/thread/thread_ex.o 00:02:59.111 CXX test/cpp_headers/scsi_spec.o 00:02:59.111 CXX test/cpp_headers/sock.o 00:02:59.111 CC test/event/reactor/reactor.o 00:02:59.111 CC test/event/reactor_perf/reactor_perf.o 00:02:59.111 CXX test/cpp_headers/stdinc.o 00:02:59.111 CXX test/cpp_headers/string.o 00:02:59.370 CXX test/cpp_headers/thread.o 00:02:59.370 CXX test/cpp_headers/trace.o 00:02:59.370 CXX 
test/cpp_headers/trace_parser.o 00:02:59.370 LINK spdk_bdev 00:02:59.370 LINK nvme_fuzz 00:02:59.370 CXX test/cpp_headers/tree.o 00:02:59.370 CC test/event/app_repeat/app_repeat.o 00:02:59.370 CXX test/cpp_headers/ublk.o 00:02:59.370 CXX test/cpp_headers/util.o 00:02:59.370 CXX test/cpp_headers/uuid.o 00:02:59.370 CXX test/cpp_headers/version.o 00:02:59.370 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.370 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.370 CXX test/cpp_headers/vhost.o 00:02:59.370 CXX test/cpp_headers/vmd.o 00:02:59.370 CXX test/cpp_headers/xor.o 00:02:59.370 CXX test/cpp_headers/zipf.o 00:02:59.370 LINK lsvmd 00:02:59.370 LINK event_perf 00:02:59.370 CC test/event/scheduler/scheduler.o 00:02:59.370 LINK mem_callbacks 00:02:59.370 CC app/vhost/vhost.o 00:02:59.370 LINK spdk_nvme 00:02:59.370 LINK led 00:02:59.629 LINK reactor 00:02:59.629 LINK reactor_perf 00:02:59.629 LINK app_repeat 00:02:59.629 LINK vhost_fuzz 00:02:59.629 LINK thread 00:02:59.629 LINK hello_sock 00:02:59.629 CC test/nvme/overhead/overhead.o 00:02:59.629 CC test/nvme/sgl/sgl.o 00:02:59.629 CC test/nvme/startup/startup.o 00:02:59.629 CC test/nvme/reset/reset.o 00:02:59.629 CC test/nvme/e2edp/nvme_dp.o 00:02:59.629 CC test/nvme/err_injection/err_injection.o 00:02:59.629 CC test/nvme/aer/aer.o 00:02:59.629 CC test/nvme/simple_copy/simple_copy.o 00:02:59.629 CC test/nvme/reserve/reserve.o 00:02:59.629 CC test/nvme/connect_stress/connect_stress.o 00:02:59.629 CC test/nvme/compliance/nvme_compliance.o 00:02:59.629 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.629 CC test/nvme/boot_partition/boot_partition.o 00:02:59.629 CC test/blobfs/mkfs/mkfs.o 00:02:59.629 CC test/accel/dif/dif.o 00:02:59.886 CC test/nvme/fdp/fdp.o 00:02:59.886 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.886 CC test/nvme/cuse/cuse.o 00:02:59.886 CC test/lvol/esnap/esnap.o 00:02:59.886 LINK vhost 00:02:59.886 LINK spdk_nvme_perf 00:02:59.886 LINK scheduler 00:02:59.886 LINK spdk_nvme_identify 00:02:59.886 LINK idxd_perf 00:02:59.886 LINK spdk_top 00:02:59.887 LINK err_injection 00:03:00.145 LINK startup 00:03:00.145 LINK mkfs 00:03:00.145 LINK doorbell_aers 00:03:00.145 LINK connect_stress 00:03:00.145 LINK boot_partition 00:03:00.145 LINK sgl 00:03:00.145 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.145 CC examples/nvme/arbitration/arbitration.o 00:03:00.145 LINK simple_copy 00:03:00.145 CC examples/nvme/hello_world/hello_world.o 00:03:00.145 LINK aer 00:03:00.145 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.145 CC examples/nvme/hotplug/hotplug.o 00:03:00.145 CC examples/nvme/reconnect/reconnect.o 00:03:00.145 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.145 CC examples/nvme/abort/abort.o 00:03:00.145 LINK fused_ordering 00:03:00.145 LINK reserve 00:03:00.145 LINK nvme_dp 00:03:00.145 LINK reset 00:03:00.402 LINK nvme_compliance 00:03:00.402 LINK fdp 00:03:00.402 CC examples/accel/perf/accel_perf.o 00:03:00.402 LINK overhead 00:03:00.402 CC examples/blob/hello_world/hello_blob.o 00:03:00.402 CC examples/blob/cli/blobcli.o 00:03:00.402 LINK pmr_persistence 00:03:00.402 LINK cmb_copy 00:03:00.402 LINK dif 00:03:00.402 LINK hello_world 00:03:00.402 LINK memory_ut 00:03:00.660 LINK hotplug 00:03:00.660 LINK arbitration 00:03:00.660 LINK hello_blob 00:03:00.660 LINK abort 00:03:00.660 LINK reconnect 00:03:00.917 CC test/bdev/bdevio/bdevio.o 00:03:00.917 LINK nvme_manage 00:03:00.917 LINK accel_perf 00:03:00.917 LINK blobcli 00:03:01.174 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.174 CC 
examples/bdev/bdevperf/bdevperf.o 00:03:01.431 LINK bdevio 00:03:01.689 LINK iscsi_fuzz 00:03:01.689 LINK hello_bdev 00:03:01.689 LINK cuse 00:03:02.253 LINK bdevperf 00:03:02.510 CC examples/nvmf/nvmf/nvmf.o 00:03:03.077 LINK nvmf 00:03:06.366 LINK esnap 00:03:06.624 00:03:06.624 real 1m19.125s 00:03:06.624 user 11m21.345s 00:03:06.624 sys 2m25.459s 00:03:06.624 07:29:57 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:06.624 07:29:57 make -- common/autotest_common.sh@10 -- $ set +x 00:03:06.624 ************************************ 00:03:06.624 END TEST make 00:03:06.624 ************************************ 00:03:06.624 07:29:57 -- common/autotest_common.sh@1142 -- $ return 0 00:03:06.624 07:29:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:06.624 07:29:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:06.624 07:29:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:06.624 07:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.624 07:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:06.624 07:29:57 -- pm/common@44 -- $ pid=845165 00:03:06.624 07:29:57 -- pm/common@50 -- $ kill -TERM 845165 00:03:06.624 07:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.624 07:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:06.624 07:29:57 -- pm/common@44 -- $ pid=845167 00:03:06.624 07:29:57 -- pm/common@50 -- $ kill -TERM 845167 00:03:06.624 07:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.624 07:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:06.624 07:29:57 -- pm/common@44 -- $ pid=845168 00:03:06.624 07:29:57 -- pm/common@50 -- $ kill -TERM 845168 00:03:06.624 07:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.624 07:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:06.624 07:29:57 -- pm/common@44 -- $ pid=845199 00:03:06.624 07:29:57 -- pm/common@50 -- $ sudo -E kill -TERM 845199 00:03:06.624 07:29:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:06.624 07:29:57 -- nvmf/common.sh@7 -- # uname -s 00:03:06.624 07:29:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:06.624 07:29:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:06.624 07:29:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:06.624 07:29:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:06.624 07:29:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:06.624 07:29:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:06.624 07:29:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:06.624 07:29:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:06.624 07:29:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:06.624 07:29:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:06.624 07:29:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:06.624 07:29:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:06.624 07:29:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:06.624 07:29:57 -- nvmf/common.sh@20 
-- # NVME_CONNECT='nvme connect' 00:03:06.624 07:29:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:06.624 07:29:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:06.624 07:29:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:06.624 07:29:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:06.624 07:29:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:06.624 07:29:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:06.624 07:29:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.624 07:29:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.624 07:29:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.624 07:29:57 -- paths/export.sh@5 -- # export PATH 00:03:06.624 07:29:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.624 07:29:57 -- nvmf/common.sh@47 -- # : 0 00:03:06.624 07:29:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:06.624 07:29:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:06.624 07:29:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:06.624 07:29:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:06.624 07:29:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:06.625 07:29:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:06.625 07:29:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:06.625 07:29:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:06.625 07:29:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:06.625 07:29:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:06.625 07:29:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:06.625 07:29:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:06.625 07:29:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:06.625 07:29:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:06.625 07:29:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:06.625 07:29:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:06.883 07:29:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:06.883 07:29:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:06.883 07:29:57 -- spdk/autotest.sh@48 -- # udevadm_pid=903759 00:03:06.883 07:29:57 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:03:06.883 07:29:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:06.883 07:29:57 -- pm/common@17 -- # local monitor 00:03:06.883 07:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.883 07:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.883 07:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.883 07:29:57 -- pm/common@21 -- # date +%s 00:03:06.883 07:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.883 07:29:57 -- pm/common@21 -- # date +%s 00:03:06.883 07:29:57 -- pm/common@25 -- # sleep 1 00:03:06.883 07:29:57 -- pm/common@21 -- # date +%s 00:03:06.883 07:29:57 -- pm/common@21 -- # date +%s 00:03:06.883 07:29:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021397 00:03:06.883 07:29:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021397 00:03:06.883 07:29:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021397 00:03:06.883 07:29:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021397 00:03:06.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021397_collect-vmstat.pm.log 00:03:06.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021397_collect-cpu-load.pm.log 00:03:06.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021397_collect-cpu-temp.pm.log 00:03:06.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021397_collect-bmc-pm.bmc.pm.log 00:03:07.820 07:29:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:07.820 07:29:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:07.820 07:29:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:07.820 07:29:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.820 07:29:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:07.820 07:29:58 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:07.820 07:29:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.820 07:29:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:07.820 07:29:58 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.820 07:29:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.820 07:29:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:07.820 07:29:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.820 07:29:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:07.820 07:29:58 -- common/autotest_common.sh@1455 -- # uname 00:03:07.820 07:29:58 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:07.820 07:29:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:07.820 07:29:58 -- common/autotest_common.sh@1475 -- # uname 00:03:07.820 07:29:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:07.820 07:29:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:07.820 07:29:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:07.820 07:29:58 -- spdk/autotest.sh@72 -- # hash lcov 00:03:07.820 07:29:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:07.820 07:29:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:07.820 --rc lcov_branch_coverage=1 00:03:07.820 --rc lcov_function_coverage=1 00:03:07.820 --rc genhtml_branch_coverage=1 00:03:07.820 --rc genhtml_function_coverage=1 00:03:07.820 --rc genhtml_legend=1 00:03:07.820 --rc geninfo_all_blocks=1 00:03:07.820 ' 00:03:07.820 07:29:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:07.820 --rc lcov_branch_coverage=1 00:03:07.820 --rc lcov_function_coverage=1 00:03:07.820 --rc genhtml_branch_coverage=1 00:03:07.820 --rc genhtml_function_coverage=1 00:03:07.820 --rc genhtml_legend=1 00:03:07.820 --rc geninfo_all_blocks=1 00:03:07.820 ' 00:03:07.820 07:29:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:07.820 --rc lcov_branch_coverage=1 00:03:07.820 --rc lcov_function_coverage=1 00:03:07.820 --rc genhtml_branch_coverage=1 00:03:07.820 --rc genhtml_function_coverage=1 00:03:07.820 --rc genhtml_legend=1 00:03:07.820 --rc geninfo_all_blocks=1 00:03:07.820 --no-external' 00:03:07.820 07:29:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:07.820 --rc lcov_branch_coverage=1 00:03:07.820 --rc lcov_function_coverage=1 00:03:07.820 --rc genhtml_branch_coverage=1 00:03:07.820 --rc genhtml_function_coverage=1 00:03:07.820 --rc genhtml_legend=1 00:03:07.820 --rc geninfo_all_blocks=1 00:03:07.820 --no-external' 00:03:07.820 07:29:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:07.820 lcov: LCOV version 1.14 00:03:07.820 07:29:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:14.412 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:14.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:14.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:14.413 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:14.413 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:14.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:14.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:14.414 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 
00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:14.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:14.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:36.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:36.330 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:41.589 07:30:32 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:41.589 07:30:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.589 07:30:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.589 07:30:32 -- spdk/autotest.sh@91 -- # rm -f 00:03:41.589 07:30:32 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.155 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:42.155 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:42.155 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:42.413 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:42.414 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:42.414 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:42.414 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:42.414 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:42.414 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:42.414 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:42.414 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:42.414 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:42.414 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:42.414 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:42.414 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:42.414 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:42.414 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:42.673 07:30:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:42.673 07:30:33 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:42.673 07:30:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:42.673 07:30:33 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:42.673 07:30:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.673 07:30:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:42.673 07:30:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:42.673 07:30:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.673 07:30:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.673 07:30:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:42.673 07:30:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.673 07:30:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:42.673 07:30:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:42.673 07:30:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:42.673 07:30:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.673 No valid GPT data, bailing 00:03:42.673 07:30:33 -- 
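
The block above is the coverage-baseline step: autotest has just confirmed CC_TYPE=gcc (not clang) and that lcov is on PATH, exported LCOV_OPTS, and captured an initial (-i) zero-count baseline over the whole tree. The long run of "no functions found" warnings is benign for the cpp_headers objects: those dummy translation units only include a public header, so geninfo finds no instrumented functions to report. A hedged sketch of the lcov flow this sets up — step 1 is the command in the trace, while steps 2 and 3 are the standard follow-up and are assumed, not shown in this excerpt:

  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
    --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
    --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external"

  # 1) zero-coverage baseline so never-executed files still appear at 0%
  lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
  # 2) after the tests run, capture the real execution counters (assumed)
  lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o "$out/cov_test.info"
  # 3) merge baseline and test data into one tracefile (assumed)
  lcov -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
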
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.673 07:30:33 -- scripts/common.sh@391 -- # pt= 00:03:42.673 07:30:33 -- scripts/common.sh@392 -- # return 1 00:03:42.673 07:30:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.673 1+0 records in 00:03:42.673 1+0 records out 00:03:42.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00249971 s, 419 MB/s 00:03:42.673 07:30:33 -- spdk/autotest.sh@118 -- # sync 00:03:42.673 07:30:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.673 07:30:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.673 07:30:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.593 07:30:35 -- spdk/autotest.sh@124 -- # uname -s 00:03:44.593 07:30:35 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:44.593 07:30:35 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.593 07:30:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.593 07:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.593 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:03:44.593 ************************************ 00:03:44.593 START TEST setup.sh 00:03:44.593 ************************************ 00:03:44.593 07:30:35 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.593 * Looking for test storage... 00:03:44.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.593 07:30:35 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:44.593 07:30:35 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:44.593 07:30:35 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:44.593 07:30:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.593 07:30:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.593 07:30:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.851 ************************************ 00:03:44.851 START TEST acl 00:03:44.851 ************************************ 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:44.851 * Looking for test storage... 
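
Just before the setup suite banner above, autotest decided it was safe to scrub /dev/nvme0n1: spdk-gpt.py found no valid GPT, blkid reported no partition-table type, so block_in_use returned 1 and the first MiB was zeroed. A reconstruction of that probe from scripts/common.sh as traced — a sketch, not the verbatim source:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  block_in_use() {
      local block=$1 pt
      # an SPDK-style GPT on the disk means it is claimed -> in use
      "$rootdir/scripts/spdk-gpt.py" "$block" && return 0
      # fall back to blkid: any partition-table signature counts as in use
      pt=$(blkid -s PTTYPE -o value "$block")
      [[ -n $pt ]] && return 0
      return 1    # the path taken above: "No valid GPT data, bailing", pt empty
  }

  # unused namespace -> wipe stale metadata so it cannot leak into the tests
  block_in_use /dev/nvme0n1 || dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
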
00:03:44.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.851 07:30:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.851 07:30:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.851 07:30:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:44.851 07:30:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:44.851 07:30:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:44.851 07:30:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:44.851 07:30:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:44.851 07:30:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.851 07:30:35 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.226 07:30:37 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:46.226 07:30:37 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:46.226 07:30:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.226 07:30:37 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:46.226 07:30:37 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.226 07:30:37 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:47.160 Hugepages 00:03:47.160 node hugesize free / total 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 00:03:47.160 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 
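
The get_zoned_devs trace above is the ACL suite checking whether any NVMe namespace is zoned and must be excluded; on this runner nvme0n1 reports "none" (hence the failing "[[ none != none ]]"), so zoned_devs stays empty. A compact reconstruction — the function and variable names follow the trace, while the PCI-address resolution at the end is an assumption:

  is_block_zoned() {
      local device=$1
      # conventional namespaces expose queue/zoned == "none"
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }

  get_zoned_devs() {
      local -gA zoned_devs=()
      local nvme bdf
      for nvme in /sys/block/nvme*; do
          is_block_zoned "${nvme##*/}" || continue
          # assumption: map the namespace back to its controller's PCI address
          bdf=$(readlink -f "$nvme/device/device") && zoned_devs[${nvme##*/}]=${bdf##*/}
      done
  }
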
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.160 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:47.419 07:30:38 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:47.419 07:30:38 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.419 07:30:38 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.419 07:30:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:47.419 ************************************ 00:03:47.419 START TEST denied 00:03:47.419 ************************************ 00:03:47.419 07:30:38 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:47.419 07:30:38 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:47.419 07:30:38 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:47.419 07:30:38 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:47.419 07:30:38 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.419 07:30:38 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.831 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:48.831 07:30:39 setup.sh.acl.denied -- 
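
The long run of continue lines above is acl.sh walking the "setup.sh status" table ("Type BDF Vendor Device NUMA Driver Device Block devices"): hugepage summary rows are skipped, every ioatdma channel hits "continue", and only the NVMe controller 0000:88:00.0 lands in devs/drivers, so "(( 1 > 0 ))" passes. A condensed sketch of that scan — the read pattern, guards, and array names are taken from the trace, and the blocked-device check models acl.sh@21:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  devs=()
  declare -A drivers

  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue                 # header / hugepage rows
      [[ $driver == nvme ]] || continue                 # ioatdma rows: continue
      [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue    # acl.sh@21 pattern
      devs+=("$dev")
      drivers["$dev"]=$driver
  done < <("$rootdir/scripts/setup.sh" status)          # wrapped as "setup output status" in the harness
  (( ${#devs[@]} > 0 ))                                 # matches "(( 1 > 0 ))" above
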
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.831 07:30:39 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.363 00:03:51.363 real 0m3.929s 00:03:51.363 user 0m1.180s 00:03:51.363 sys 0m1.846s 00:03:51.363 07:30:42 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.363 07:30:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:51.363 ************************************ 00:03:51.363 END TEST denied 00:03:51.363 ************************************ 00:03:51.363 07:30:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:51.363 07:30:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:51.363 07:30:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.363 07:30:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.363 07:30:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.363 ************************************ 00:03:51.363 START TEST allowed 00:03:51.363 ************************************ 00:03:51.363 07:30:42 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:51.363 07:30:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:51.363 07:30:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:51.363 07:30:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:51.363 07:30:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.363 07:30:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.889 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.889 07:30:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:53.889 07:30:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:53.889 07:30:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:53.889 07:30:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.889 07:30:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.265 00:03:55.265 real 0m3.752s 00:03:55.265 user 0m0.995s 00:03:55.265 sys 0m1.602s 00:03:55.265 07:30:46 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.265 07:30:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:55.265 ************************************ 00:03:55.265 END TEST allowed 00:03:55.265 ************************************ 00:03:55.265 07:30:46 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:55.265 00:03:55.265 real 0m10.383s 00:03:55.265 user 0m3.217s 00:03:55.265 sys 0m5.170s 00:03:55.265 07:30:46 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.265 07:30:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.265 ************************************ 00:03:55.265 END TEST acl 00:03:55.265 ************************************ 00:03:55.265 07:30:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.265 07:30:46 setup.sh -- 
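
Both ACL cases have the same shape: set the filter variable, re-run setup.sh config, and assert on where the controller ends up. The grep patterns, the PCI_BLOCKED/PCI_ALLOWED values, and the nvme -> vfio-pci rebind all appear verbatim in the traces; the condensed form below is a sketch of that flow, not the verbatim test:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # denied: block the controller and expect setup.sh to skip it entirely,
  # leaving it bound to the kernel nvme driver
  PCI_BLOCKED=' 0000:88:00.0' "$rootdir/scripts/setup.sh" config \
      | grep 'Skipping denied controller at 0000:88:00.0'
  driver=$(readlink -f /sys/bus/pci/devices/0000:88:00.0/driver)
  [[ ${driver##*/} == nvme ]]

  # allowed: allow only this controller and expect a userspace rebind
  PCI_ALLOWED=0000:88:00.0 "$rootdir/scripts/setup.sh" config \
      | grep -E '0000:88:00.0 .*: nvme -> .*'    # trace shows "nvme -> vfio-pci"
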
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.265 07:30:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.265 07:30:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.265 07:30:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.265 ************************************ 00:03:55.265 START TEST hugepages 00:03:55.265 ************************************ 00:03:55.266 07:30:46 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.266 * Looking for test storage... 00:03:55.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43468580 kB' 'MemAvailable: 46972956 kB' 'Buffers: 2704 kB' 'Cached: 10496012 kB' 'SwapCached: 0 kB' 'Active: 7487304 kB' 'Inactive: 3506596 kB' 'Active(anon): 7092712 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498444 kB' 'Mapped: 205028 kB' 'Shmem: 6597528 kB' 'KReclaimable: 194056 kB' 'Slab: 560584 kB' 'SReclaimable: 194056 kB' 'SUnreclaim: 366528 kB' 'KernelStack: 12816 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 8208480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.266 07:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.266 07:30:46 
setup.sh.hugepages -- setup/common.sh@31-32 -- # [xtrace condensed, 00:03:55.266-267 07:30:46] the IFS=': ' read loop continues past each remaining /proc/meminfo key (AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) until Hugepagesize matches
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@27-30 -- # [xtrace condensed] local node; nodes_sys[0]=2048, nodes_sys[1]=0
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:55.267 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:55.268 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # [xtrace condensed] for each of the 2 nodes, echo 0 into every hugepages-* pool under /sys/devices/system/node/node$node/hugepages/
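Condensed or not, the pattern in the trace above is simple; here is a self-contained reconstruction of the two helpers it exercises (bodies inferred from the xtrace, not copied from setup/common.sh or setup/hugepages.sh, so treat them as a sketch):

    #!/usr/bin/env bash
    shopt -s extglob                     # the trace globs nodes with +([0-9])

    # Value of one /proc/meminfo key; every non-matching key in the
    # trace above is one 'continue' from this loop.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                  # 2048 for Hugepagesize on this host
            return 0
        done < /proc/meminfo
        return 1
    }

    # Zero every hugepage pool on every NUMA node (needs root), matching
    # the clear_hp loop just traced: one 'echo 0' per node per page size.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node+([0-9]); do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }

    get_meminfo Hugepagesize             # -> 2048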
00:03:55.268 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:55.268 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:55.268 07:30:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:55.268 07:30:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:55.268 07:30:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:55.268 07:30:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:55.268 ************************************
00:03:55.268 START TEST default_setup
00:03:55.268 ************************************
00:03:55.268 07:30:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:55.268 07:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:55.268 07:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49-73 -- # [xtrace condensed] local size=2097152, node_ids=('0'), (( size >= default_hugepages )), nr_hugepages=1024, user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, nodes_test[0]=1024, return 0
00:03:55.268 07:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:55.268 07:30:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.268 07:30:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:56.206 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:56.206 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:56.206 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:56.466 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:56.466 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:56.466 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:56.466 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:56.466 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:56.466 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:57.410 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:57.411 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:57.411 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:57.411 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:57.411 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:57.411 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # [xtrace condensed] local get=AnonHugePages node= var val; /sys/devices/system/node/node/meminfo absent, so mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:57.411 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45597524 kB' 'MemAvailable: 49101884 kB' 'Buffers: 2704 kB' 'Cached: 10496104 kB' 'SwapCached: 0 kB' 'Active: 7505572 kB' 'Inactive: 3506596 kB' 'Active(anon): 7110980 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516580 kB' 'Mapped: 205188 kB' 'Shmem: 6597620 kB' 'KReclaimable: 194024 kB' 'Slab: 560068 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366044 kB' 'KernelStack: 12784 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8229460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:03:57.411-412 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed] the read loop continues past every key until AnonHugePages matches
00:03:57.412 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.412 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.412 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
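One detail worth noticing in the condensed lookup above: the call logs node= (empty) and a failed -e test on /sys/devices/system/node/node/meminfo before settling on /proc/meminfo. That matches a get_meminfo with an optional node argument; a sketch of that dispatch, reconstructed from the traced names (mem_f, mapfile, and the 'Node N ' prefix strip at common.sh@29), not copied from the repo:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line mem_f=/proc/meminfo
        local -a mem
        # with a node argument, read that node's meminfo instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start 'Node 0 ...'; no-op for /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages it behaves exactly like the trace; called as get_meminfo HugePages_Free 0 it would return node 0's counter instead.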
00:03:57.412 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.412 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # [xtrace condensed] local get=HugePages_Surp node= var val; node meminfo absent, so mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:57.412 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45597960 kB' 'MemAvailable: 49102320 kB' 'Buffers: 2704 kB' 'Cached: 10496108 kB' 'SwapCached: 0 kB' 'Active: 7505832 kB' 'Inactive: 3506596 kB' 'Active(anon): 7111240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516820 kB' 'Mapped: 205132 kB' 'Shmem: 6597624 kB' 'KReclaimable: 194024 kB' 'Slab: 560056 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366032 kB' 'KernelStack: 12768 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8229480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:03:57.412-414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed] the read loop continues past every key until HugePages_Surp matches
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
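Two of the three bookkeeping values are now in hand (anon=0, surp=0), and the reserved-page count comes next. A plausible reading of what the hugepages.sh@97-100 sequence feeds into, reusing the get_meminfo sketch above; the final comparison is hypothetical, since the trace only shows the lookups:

    anon=$(get_meminfo AnonHugePages)    # 0 here: THP is not inflating the numbers
    surp=$(get_meminfo HugePages_Surp)   # 0 here: nothing allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # pages promised to mappings but not yet faulted
    # hypothetical check -- the requested pool should be visible globally:
    (( $(get_meminfo HugePages_Total) == 1024 )) || echo 'unexpected pool size' >&2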
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.414 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45598636 kB' 'MemAvailable: 49102996 kB' 'Buffers: 2704 kB' 'Cached: 10496112 kB' 'SwapCached: 0 kB' 'Active: 7506300 kB' 'Inactive: 3506596 kB' 'Active(anon): 7111708 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516936 kB' 'Mapped: 205056 kB' 'Shmem: 6597628 kB' 'KReclaimable: 194024 kB' 'Slab: 560040 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366016 kB' 'KernelStack: 12784 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8229500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... the setup/common.sh@31-32 read/compare/continue trace repeats for every meminfo key from MemTotal through HugePages_Free (timestamps 00:03:57.414 through 00:03:57.678); none matches until: ...]
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:57.678 nr_hugepages=1024
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.678 resv_hugepages=0
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.678 surplus_hugepages=0
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.678 anon_hugepages=0
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:57.678 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.679 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45597832 kB' 'MemAvailable: 49102192 kB' 'Buffers: 2704 kB' 'Cached: 10496144 kB' 'SwapCached: 0 kB' 'Active: 7505720 kB' 'Inactive: 3506596 kB' 'Active(anon): 7111128 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516712 kB' 'Mapped: 205056 kB' 'Shmem: 6597660 kB' 'KReclaimable: 194024 kB' 'Slab: 560040 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366016 kB' 'KernelStack: 12800 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8229520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
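For readers following the trace: get_meminfo in setup/common.sh snapshots a meminfo file and then scans it key by key, echoing the value of the requested key for the caller to capture (the surp=0 and resv=0 assignments above). A minimal sketch of that pattern, assuming a plain lookup over /proc/meminfo rather than SPDK's exact array-based implementation (get_meminfo_sketch is an illustrative name, not the script's):

```sh
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (not SPDK's verbatim code):
# scan /proc/meminfo's "key: value" pairs and print the value of one key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # same comparison as the setup/common.sh@32 lines in the trace
        if [[ $var == "$get" ]]; then
            echo "$val"          # value only, e.g. "0" or "1024"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

surp=$(get_meminfo_sketch HugePages_Surp)   # mirrors hugepages.sh@99 above
resv=$(get_meminfo_sketch HugePages_Rsvd)   # mirrors hugepages.sh@100
```

Setting IFS to ': ' makes read split "MemTotal: 60541692 kB" into var=MemTotal, val=60541692, with the trailing unit falling into the throwaway `_` field, which is why the trace's compare/continue loop only ever inspects bare key names.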
[... the setup/common.sh@31-32 read/compare/continue trace repeats for every meminfo key from MemTotal through HugePages_Free (timestamps 00:03:57.679 through 00:03:57.680); the scan stops at HugePages_Total: ...]
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.680 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.681 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21464884 kB' 'MemUsed: 11412056 kB' 'SwapCached: 0 kB' 'Active: 5101388 kB' 'Inactive: 3264144 kB' 'Active(anon): 4912816 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081224 kB' 'Mapped: 70804 kB' 'AnonPages: 287536 kB' 'Shmem: 4628508 kB' 'KernelStack: 6792 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114516 kB' 'Slab: 307772 kB' 'SReclaimable: 114516 kB' 'SUnreclaim: 193256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
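The per-node lookup just traced switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that /proc/meminfo lines lack; the mem=("${mem[@]#Node +([0-9]) }") step strips that prefix so the same parser works on both files. A small standalone demonstration of the normalization, assuming a NUMA Linux machine (extglob is required for the +([0-9]) pattern):

```sh
#!/usr/bin/env bash
# Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
# Stripping the "Node <n> " prefix makes them parse like /proc/meminfo lines.
shopt -s extglob                      # enables the +([0-9]) pattern below
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")      # same expansion as setup/common.sh@29
printf '%s\n' "${mem[@]}" | grep '^HugePages'
```

Snapshotting the whole file with mapfile first, as the script does, also means every key is read from one consistent point in time instead of re-opening the file per lookup.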
[... the setup/common.sh@31-32 read/compare/continue trace repeats for every per-node meminfo key from MemTotal through HugePages_Free (timestamps 00:03:57.681 through 00:03:57.682); the scan stops at HugePages_Surp: ...]
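The arithmetic guards at hugepages.sh@107 and @110 earlier in this test encode the kernel's hugepage accounting: HugePages_Total should equal the pages the test requested plus any surplus and reserved pages. With this run's numbers the identity is trivially satisfied; as a worked check (values copied from the trace, variable names illustrative):

```sh
#!/usr/bin/env bash
# The consistency check behind hugepages.sh@107/@110 in this trace, using the
# values this run observed via get_meminfo.
nr_hugepages=1024   # pages the test asked for
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1024          # HugePages_Total

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
    echo "unexpected hugepage state" >&2
fi
```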
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.682 node0=1024 expecting 1024 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.682 00:03:57.682 real 0m2.309s 00:03:57.682 user 0m0.586s 00:03:57.682 sys 0m0.835s 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.682 07:30:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:57.682 ************************************ 00:03:57.682 END TEST default_setup 00:03:57.682 ************************************ 00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:57.682 07:30:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.682 ************************************ 00:03:57.682 START TEST per_node_1G_alloc 00:03:57.682 ************************************ 00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.682 07:30:48 
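Note: the loop traced above is test/setup/common.sh's get_meminfo walking a meminfo snapshot key by key; the backslashes in entries like [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] are only xtrace's rendering of a quoted (literal, non-glob) right-hand side. A minimal sketch of the same scan, simplified from what the trace shows (the traced helper actually mapfiles the whole file first and strips any "Node <n> " prefix; names here are illustrative):

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # with a node argument, the per-node counters are read instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # quoted RHS: exact string match
            echo "$val"                       # the "kB" field lands in _
            return 0
        done <"$mem_f"
        return 1
    }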
00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:57.682 07:30:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:57.682 07:30:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.682 ************************************
00:03:57.682 START TEST per_node_1G_alloc
00:03:57.682 ************************************
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.682 07:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
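Note: the per-node request above resolves exactly as the default 2048 kB hugepage size predicts: 1048576 kB per node / 2048 kB per page = 512 pages per node, 1024 in total. A quick sketch of that arithmetic (variable names are illustrative, not the script's own):

    size_kb=1048576                       # 1 GiB requested per node
    hugepage_kb=2048                      # Hugepagesize from /proc/meminfo
    per_node=$((size_kb / hugepage_kb))   # 512, matching nr_hugepages=512
    echo "NRHUGE=$per_node HUGENODE=0,1 total=$((2 * per_node))"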
00:03:59.064 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.064 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.064 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.064 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.064 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.064 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.064 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.065 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.065 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.065 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.065 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.065 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.065 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.065 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.065 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.065 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.065 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.065 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45584592 kB' 'MemAvailable: 49088952 kB' 'Buffers: 2704 kB' 'Cached: 10496216 kB' 'SwapCached: 0 kB' 'Active: 7507144 kB' 'Inactive: 3506596 kB' 'Active(anon): 7112552 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518608 kB' 'Mapped: 205652 kB' 'Shmem: 6597732 kB' 'KReclaimable: 194024 kB' 'Slab: 560204 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366180 kB' 'KernelStack: 12816 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8231060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
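Note: the snapshot above is internally consistent: HugePages_Total(1024) x Hugepagesize(2048 kB) = 2097152 kB, exactly the reported Hugetlb figure, and HugePages_Free equals HugePages_Total, so the pool requested by setup.sh is allocated but not yet mapped by any process.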
00:03:59.065 [xtrace elided: the AnonHugePages lookup walks every key of the snapshot above (MemTotal through HardwareCorrupted) with the same IFS=': ' / read -r var val _ / continue cycle]
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
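Note: verify_nr_hugepages drives three of these lookups back to back; roughly, as a sketch of the flow the trace follows (simplified from test/setup/hugepages.sh, with only what the xtrace itself shows):

    anon=$(get_meminfo AnonHugePages)    # 0 above: THP is not inflating the counts
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted-in pages
    # the test then compares the pool, net of surplus/reserved pages, per node
    # and in total, against the node0=512 / node1=512 split it configured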
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.066 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45586836 kB' 'MemAvailable: 49091196 kB' 'Buffers: 2704 kB' 'Cached: 10496224 kB' 'SwapCached: 0 kB' 'Active: 7509900 kB' 'Inactive: 3506596 kB' 'Active(anon): 7115308 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521428 kB' 'Mapped: 205596 kB' 'Shmem: 6597740 kB' 'KReclaimable: 194024 kB' 'Slab: 560212 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366188 kB' 'KernelStack: 12800 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8233984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:03:59.067 [xtrace elided: the same key-by-key scan as above, MemTotal through HugePages_Rsvd, each skipped with continue]
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
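Note: every probe so far has hit the [[ -e /sys/devices/system/node/node/meminfo ]] check with an empty $node and fallen back to the global /proc/meminfo. With a node id, the same helper reads the per-node file, whose lines carry a "Node <n> " prefix; that is why the trace strips it with mem=("${mem[@]#Node +([0-9]) }") before the key/value split. A per-node lookup would look like this (node id and output values purely illustrative):

    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    head -n 2 "$mem_f"
    # Node 0 MemTotal:  30270844 kB
    # Node 0 MemFree:   22792216 kB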
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.068 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45587256 kB' 'MemAvailable: 49091616 kB' 'Buffers: 2704 kB' 'Cached: 10496236 kB' 'SwapCached: 0 kB' 'Active: 7511640 kB' 'Inactive: 3506596 kB' 'Active(anon): 7117048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523100 kB' 'Mapped: 206000 kB' 'Shmem: 6597752 kB' 'KReclaimable: 194024 kB' 'Slab: 560188 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366164 kB' 'KernelStack: 12784 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8235732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196020 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:03:59.069 [xtrace elided: the HugePages_Rsvd scan steps through MemTotal, MemFree, ... SReclaimable with the same continue cycle]
00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.069 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.070 nr_hugepages=1024 00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.070 
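The scan just traced is setup/common.sh's get_meminfo helper. A minimal sketch of the pattern it implements, reconstructed only from the traced statements above (the implementation in SPDK's setup/common.sh is the reference; anything not visible in the trace is an inference):

    #!/usr/bin/env bash
    # Sketch reconstructed from the trace: look up one key either in
    # /proc/meminfo (no node argument) or in the per-node sysfs meminfo.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo mem
        # Per-node statistics live under sysfs when a node index is passed;
        # with node empty, the @23 test above fails and /proc/meminfo is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs prefixes every key with "Node <id> "; strip it, as the trace
        # shows at setup/common.sh@29 (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" above
            echo "$val"                        # e.g. 0 for HugePages_Rsvd
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd      # global lookup: prints 0 in this run
    get_meminfo HugePages_Surp 0    # per-node variant used later in the log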
00:03:59.070 resv_hugepages=0
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.070 surplus_hugepages=0
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.070 anon_hugepages=0
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.070 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45587256 kB' 'MemAvailable: 49091616 kB' 'Buffers: 2704 kB' 'Cached: 10496240 kB' 'SwapCached: 0 kB' 'Active: 7506984 kB' 'Inactive: 3506596 kB' 'Active(anon): 7112392 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518436 kB' 'Mapped: 205564 kB' 'Shmem: 6597756 kB' 'KReclaimable: 194024 kB' 'Slab: 560188 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366164 kB' 'KernelStack: 12768 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8231256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-32: IFS=': ' read -r var val _ walks every key of the dump above; each key that is not HugePages_Total hits "continue" ...]
00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
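The checks at setup/hugepages.sh@107-110 verify the kernel's hugepage ledger against the requested pool. A hedged restatement of that arithmetic with this run's numbers, reusing the get_meminfo sketch above (that the literal 1024 compared at @107 is the free-page count is an assumption; the dump reports both HugePages_Free: 1024 and HugePages_Total: 1024, so either reading passes):

    # Ledger checks mirrored from the trace at setup/hugepages.sh@107-110.
    nr_hugepages=1024    # requested pool of 2048 kB pages, echoed in the log
    resv_hugepages=0     # HugePages_Rsvd, first scan above
    surplus_hugepages=0  # HugePages_Surp
    free=1024            # HugePages_Free, per the dump (assumed source of @107's 1024)

    (( free == nr_hugepages + surplus_hugepages + resv_hugepages ))   # @107
    total=$(get_meminfo HugePages_Total)                              # prints 1024
    (( total == nr_hugepages ))                                       # @109
    (( total == nr_hugepages + surplus_hugepages + resv_hugepages ))  # @110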
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22511220 kB' 'MemUsed: 10365720 kB' 'SwapCached: 0 kB' 'Active: 5101424 kB' 'Inactive: 3264144 kB' 'Active(anon): 4912852 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081284 kB' 'Mapped: 71304 kB' 'AnonPages: 287312 kB' 'Shmem: 4628568 kB' 'KernelStack: 6728 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114516 kB' 'Slab: 307816 kB' 'SReclaimable: 114516 kB' 'SUnreclaim: 193300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.072 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.073 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23076472 kB' 'MemUsed: 4588280 kB' 'SwapCached: 0 kB' 'Active: 2410460 kB' 'Inactive: 242452 kB' 'Active(anon): 2204440 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2417660 kB' 'Mapped: 134260 kB' 'AnonPages: 235612 kB' 'Shmem: 1969188 kB' 'KernelStack: 6056 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79508 kB' 'Slab: 252372 kB' 'SReclaimable: 79508 kB' 'SUnreclaim: 172864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
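The trace above is setup/common.sh's get_meminfo helper: it loads /sys/devices/system/node/node1/meminfo (falling back to /proc/meminfo when no node is given), strips the "Node N " prefix, then reads "Key: value kB" records until the requested key matches and echoes the value. A condensed reconstruction of that pattern, with names taken from the trace; a sketch, not the verbatim SPDK source:

shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Prefer the per-node meminfo when a NUMA node was requested and exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    # Per-node files prefix every record with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the records; non-matching keys fall through via continue, exactly
    # as the xtrace shows, until the requested key is found.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Surp 1   # prints 0 here, matching the dump above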
00:03:59.074 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 [... xtrace elided: the same read loop now walks node1's meminfo, skipping MemTotal through FilePmdMapped one continue / IFS / read triplet at a time ...] 00:03:59.075 07:30:50
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.075 node0=512 expecting 512 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:59.075 node1=512 expecting 512 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.075 00:03:59.075 real 0m1.454s 00:03:59.075 user 0m0.611s 00:03:59.075 sys 0m0.806s 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.075 07:30:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.075 ************************************ 00:03:59.075 END TEST per_node_1G_alloc 00:03:59.075 ************************************ 00:03:59.075 07:30:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.075 07:30:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:59.075 07:30:50 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.075 07:30:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.075 07:30:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.075 ************************************ 00:03:59.075 START TEST even_2G_alloc 00:03:59.075 ************************************ 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.075 07:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.457 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.457 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
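even_2G_alloc has just asked get_test_nr_hugepages for 2097152 kB: at the default 2048 kB hugepage size that is 1024 pages, which get_test_nr_hugepages_per_node spreads evenly over the two NUMA nodes, giving the nodes_test[1]=512 and nodes_test[0]=512 assignments traced above. A minimal sketch of that arithmetic (the division itself is inferred, since the trace only shows the resulting values), before scripts/setup.sh's device listing resumes below:

# Sizes in kB, mirroring the hugepages.sh@49-@84 trace.
size=2097152                                  # requested total (2 GiB)
default_hugepages=2048                        # Hugepagesize from /proc/meminfo
nr_hugepages=$((size / default_hugepages))    # -> 1024

_no_nodes=2                                   # NUMA nodes on this rig
per_node=$((nr_hugepages / _no_nodes))        # -> 512
declare -a nodes_test
while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$per_node       # node1 first, then node0, as traced
    _no_nodes=$((_no_nodes - 1))
done
echo "${nodes_test[@]}"                       # -> 512 512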
00:04:00.457 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.457 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.457 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.457 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.457 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.457 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.457 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.457 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.457 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.457 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.457 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.457 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.457 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.457 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.457 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.457 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45595444 kB' 'MemAvailable: 49099804 kB' 'Buffers: 2704 kB' 'Cached: 10496360 kB' 'SwapCached: 0 kB' 'Active: 7507408 kB' 'Inactive: 3506596 kB' 'Active(anon): 7112816 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518360 kB' 'Mapped: 205124 kB' 'Shmem: 6597876 kB' 'KReclaimable: 194024 kB' 'Slab: 560212 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366188 kB' 'KernelStack: 12848 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8229972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.458 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.458 
07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: the AnonHugePages scan likewise skips Inactive through Bounce ...] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.459 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45598488 kB' 'MemAvailable: 49102848 kB' 'Buffers: 2704 kB' 'Cached: 10496360 kB' 'SwapCached: 0 kB' 'Active: 7507236 kB' 'Inactive: 3506596 kB' 'Active(anon): 7112644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518140 kB' 'Mapped: 205032 kB' 'Shmem: 6597876 kB' 'KReclaimable: 194024 kB' 'Slab: 560216 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366192 kB' 'KernelStack: 12832 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8229992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.460 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: the HugePages_Surp scan skips Buffers through FilePmdMapped in the same fashion; the captured log is truncated mid-loop here ...]
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.461 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.461 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.461 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
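The trace above is setup/common.sh's get_meminfo helper pulling a single field (here HugePages_Surp, then HugePages_Rsvd) out of /proc/meminfo. A minimal sketch of the traced logic, reconstructed from the @17-@33 xtrace lines -- not the verbatim SPDK source, so the exact layout may differ:

shopt -s extglob

get_meminfo() {
    # usage: get_meminfo <field> [<numa node>] -> prints that field's value
    local get=${1:-MemTotal} node=${2:-} var val
    local mem_f mem
    mem_f=/proc/meminfo
    # with a node argument, prefer the per-node view; with node empty the
    # path is .../node/meminfo, which never exists, so /proc/meminfo wins
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo prefixes every line with "Node N "; strip it (extglob)
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # every miss is one 'continue' in the trace
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With the node argument empty here, node$node expands to plain "node", the [[ -e ]] test at @23 fails, and the system-wide /proc/meminfo is parsed -- exactly the miss visible above.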
00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.462 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45598796 kB' 'MemAvailable: 49103156 kB' 'Buffers: 2704 kB' 'Cached: 10496380 kB' 'SwapCached: 0 kB' 'Active: 7506936 kB' 'Inactive: 3506596 kB' 'Active(anon): 7112344 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517892 kB' 'Mapped: 205092 kB' 'Shmem: 6597896 kB' 'KReclaimable: 194024 kB' 'Slab: 560300 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366276 kB' 'KernelStack: 12832 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8230012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... ~50 xtrace lines condensed: setup/common.sh@31-32 compares each meminfo field (MemTotal ... HugePages_Free) against HugePages_Rsvd; every one misses and hits 'continue' ...]
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
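At this point hugepages.sh has surp=0, resv=0, and the requested nr_hugepages=1024, and the @107/@109 arithmetic checks assert those totals are mutually consistent before re-reading HugePages_Total. A hedged sketch of that accounting step -- variable names follow the trace, the enclosing test harness and the get_meminfo sketch above are assumed:

# values as observed in this run
surp=$(get_meminfo HugePages_Surp)      # -> 0
resv=$(get_meminfo HugePages_Rsvd)      # -> 0
nr_hugepages=1024                       # requested by the even_2G_alloc test
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
# the allocation only counts as consistent when the kernel-reported total
# equals the requested pages plus surplus and reserved (1024 == 1024+0+0 here)
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))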
00:04:00.464 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.465 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.465 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45598796 kB' 'MemAvailable: 49103156 kB' 'Buffers: 2704 kB' 'Cached: 10496404 kB' 'SwapCached: 0 kB' 'Active: 7506984 kB' 'Inactive: 3506596 kB' 'Active(anon): 7112392 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517884 kB' 'Mapped: 205092 kB' 'Shmem: 6597920 kB' 'KReclaimable: 194024 kB' 'Slab: 560300 kB' 'SReclaimable: 194024 kB' 'SUnreclaim: 366276 kB' 'KernelStack: 12832 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8230036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... ~50 xtrace lines condensed: setup/common.sh@31-32 compares each meminfo field (MemTotal ... Unaccepted) against HugePages_Total; every one misses and hits 'continue' ...]
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.466 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.467 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
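get_nodes discovered two NUMA nodes holding 512 hugepages each (nodes_sys[0]=nodes_sys[1]=512, no_nodes=2), i.e. the 1024 even-2G pages split evenly, and get_meminfo HugePages_Surp 0 below re-reads the per-node file /sys/devices/system/node/node0/meminfo (mem_f switches at @24 now that the node-qualified path exists). A sketch of such a per-node check, assuming the per-node count is readable via get_meminfo's node argument and that an even split is the pass criterion:

shopt -s extglob
declare -a nodes_sys
# enumerate NUMA nodes the same way the @29 loop does
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}               # 2 on this host
# an even allocation should leave nr_hugepages / no_nodes = 512 pages per node
for node in "${!nodes_sys[@]}"; do
    (( nodes_sys[node] == 1024 / no_nodes )) \
        || echo "node$node holds ${nodes_sys[node]} pages, expected $(( 1024 / no_nodes ))"
done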
4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114516 kB' 'Slab: 307764 kB' 'SReclaimable: 114516 kB' 'SUnreclaim: 193248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.467 [... per-key scan of the node0 snapshot: every field from MemTotal through FilePmdMapped fails the HugePages_Surp match; IFS=': ' / read -r / continue repeat for each ...] 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.468 07:30:51
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.468 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.469 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.469 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.469 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.469 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.469 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.469 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23080488 kB' 'MemUsed: 4584264 kB' 'SwapCached: 0 kB' 'Active: 2406676 kB' 'Inactive: 242452 kB' 'Active(anon): 2200656 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2417760 kB' 'Mapped: 134256 kB' 'AnonPages: 231552 kB' 'Shmem: 1969288 kB' 'KernelStack: 6088 kB' 'PageTables: 
3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79508 kB' 'Slab: 252536 kB' 'SReclaimable: 79508 kB' 'SUnreclaim: 173028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.469 [... per-key scan of the node1 snapshot: every field from MemTotal through FilePmdMapped fails the HugePages_Surp match; IFS=': ' / read -r / continue repeat for each ...] 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.470 07:30:51
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.470 node0=512 expecting 512 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:00.470 node1=512 expecting 512 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.470 00:04:00.470 real 0m1.419s 00:04:00.470 user 0m0.609s 00:04:00.470 sys 0m0.770s 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.470 07:30:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.470 ************************************ 00:04:00.470 END TEST even_2G_alloc 00:04:00.470 ************************************ 00:04:00.470 07:30:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.470 07:30:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.470 07:30:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.470 07:30:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.470 07:30:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.730 
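For anyone decoding the trace above rather than rerunning it: the even_2G_alloc pass reduces to reading HugePages_* keys from /proc/meminfo and from each /sys/devices/system/node/nodeN/meminfo (whose lines carry a "Node N " prefix that setup/common.sh strips with the extglob pattern visible in the trace), then checking that the 1024 global 2 MiB pages landed as 512 per node. The snippet below is a minimal standalone sketch of that pattern under the same /proc and /sys layout — it is not SPDK's setup/common.sh itself, and the helper name meminfo_key plus the two-node split assumption are illustrative only.

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) prefix-stripping pattern

# Hypothetical helper mirroring the get_meminfo loop traced above:
# slurp the file, strip any "Node N " prefix, then split each line on ': '.
meminfo_key() {
    local key=$1 file=${2:-/proc/meminfo} line var val _
    local -a lines
    mapfile -t lines < "$file"
    lines=("${lines[@]#Node +([0-9]) }")
    for line in "${lines[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"                # value only, e.g. "1024" or "512"
            return 0
        fi
    done
    return 1
}

total=$(meminfo_key HugePages_Total)   # 1024 in the run above
for node_dir in /sys/devices/system/node/node+([0-9]); do
    # Assumes the even two-node split checked by this test rig (512 each).
    echo "node${node_dir##*node}=$(meminfo_key HugePages_Total "$node_dir/meminfo") expecting $((total / 2))"
done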
************************************ 00:04:00.730 START TEST odd_alloc 00:04:00.730 ************************************ 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.730 07:30:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.671 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.671 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.671 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.671 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.671 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.671 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.671 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:04:01.671 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.671 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.671 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.671 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.671 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.671 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.671 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.671 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.671 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.671 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.941 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45582080 kB' 'MemAvailable: 49086432 kB' 'Buffers: 2704 kB' 'Cached: 10496488 kB' 'SwapCached: 0 kB' 'Active: 7504676 kB' 'Inactive: 3506596 kB' 'Active(anon): 7110084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515272 kB' 'Mapped: 204204 kB' 'Shmem: 6598004 kB' 'KReclaimable: 194008 kB' 'Slab: 560216 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366208 kB' 'KernelStack: 12976 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 8217148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196304 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:04:01.942 [... per-key scan of the /proc/meminfo snapshot: every field from MemTotal through CommitLimit fails the AnonHugePages match; IFS=': ' / read -r / continue repeat for each ...] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.942 07:30:53
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.942 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.943 
00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.943 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45583340 kB' 'MemAvailable: 49087692 kB' 'Buffers: 2704 kB' 'Cached: 10496488 kB' 'SwapCached: 0 kB' 'Active: 7504656 kB' 'Inactive: 3506596 kB' 'Active(anon): 7110064 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515188 kB' 'Mapped: 204208 kB' 'Shmem: 6598004 kB' 'KReclaimable: 194008 kB' 'Slab: 560248 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366240 kB' 'KernelStack: 12944 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8217168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196368 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... per-key xtrace elided: every field of the snapshot above, MemTotal through HugePages_Free, is compared against HugePages_Surp and skipped with continue ...]
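Comparisons rendered as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] are not corruption: bash xtrace backslash-escapes a quoted right-hand side of == inside [[ ]] to show that the comparison is a literal string match rather than a glob. A two-line reproduction of the rendering (variable names are illustrative):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]
    # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]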
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.944 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45585192 kB' 'MemAvailable: 49089544 kB' 'Buffers: 2704 kB' 'Cached: 10496488 kB' 'SwapCached: 0 kB' 'Active: 7505320 kB' 'Inactive: 3506596 kB' 'Active(anon): 7110728 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515424 kB' 'Mapped: 204208 kB' 'Shmem: 6598004 kB' 'KReclaimable: 194008 kB' 'Slab: 560240 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366232 kB' 'KernelStack: 13088 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8214828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... per-key xtrace elided: MemTotal through HugePages_Free compared against HugePages_Rsvd and skipped with continue ...]
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:01.946 nr_hugepages=1025
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.946 resv_hugepages=0
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.946 surplus_hugepages=0
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.946 anon_hugepages=0
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
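With resv known, hugepages.sh@102-110 prints the bookkeeping (nr_hugepages=1025, resv/surplus/anon all 0) and asserts the arithmetic of the odd_alloc case: the kernel's HugePages_Total must equal the odd requested page count plus surplus and reserved pages. A hedged sketch of that accounting check against /proc/meminfo (verify_odd_alloc is an illustrative name, not the test's own function):

    # Accounting behind hugepages.sh@107-109: total == requested + surplus + reserved.
    verify_odd_alloc() {
        local nr_hugepages=1025 surp=0 resv=0
        local total
        total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
        (( total == nr_hugepages + surp + resv )) || return 1
        # With surp and resv both 0, the total must be exactly the odd count.
        (( total == nr_hugepages ))
    }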
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.946 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45586396 kB' 'MemAvailable: 49090748 kB' 'Buffers: 2704 kB' 'Cached: 10496528 kB' 'SwapCached: 0 kB' 'Active: 7503672 kB' 'Inactive: 3506596 kB' 'Active(anon): 7109080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514204 kB' 'Mapped: 204156 kB' 'Shmem: 6598044 kB' 'KReclaimable: 194008 kB' 'Slab: 560432 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366424 kB' 'KernelStack: 12688 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8214848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... per-key xtrace elided: MemTotal through Percpu compared against HugePages_Total and skipped with continue; the console log breaks off mid-scan ...]
00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22513804 kB' 'MemUsed: 10363136 kB' 'SwapCached: 0 kB' 'Active: 5099324 kB' 'Inactive: 3264144 kB' 'Active(anon): 4910752 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081448 kB' 'Mapped: 70116 kB' 'AnonPages: 285140 kB' 'Shmem: 4628732 kB' 'KernelStack: 6744 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114500 kB' 'Slab: 307776 kB' 'SReclaimable: 114500 kB' 'SUnreclaim: 193276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
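The repeating IFS/read/compare/continue quartet above is SPDK's get_meminfo helper scanning a meminfo file one field at a time. A minimal sketch of that loop, reconstructed from the xtrace itself (the names get, node, var, val, mem_f and mem come from the trace; the surrounding plumbing is an assumption, and the real setup/common.sh may differ in small details):

#!/usr/bin/env bash
shopt -s extglob   # for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem line
    # Prefer the per-node sysfs view when a node id was given and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # the quartet repeated in the trace
        echo "$val"                        # e.g. 1025 for HugePages_Total
        return 0
    done
    return 1
}

get_meminfo HugePages_Total     # whole-machine query
get_meminfo HugePages_Surp 0    # NUMA node 0 only

Because every non-matching field costs four traced statements, a single lookup produces the long runs of continue lines seen throughout this log.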
00:04:01.948 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace loop: [[ $var == HugePages_Surp ]] fails and continues for node0 fields MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free]
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23073940 kB' 'MemUsed: 4590812 kB' 'SwapCached: 0 kB' 'Active: 2404132 kB' 'Inactive: 242452 kB' 'Active(anon): 2198112 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2417804 kB' 'Mapped: 134036 kB' 'AnonPages: 228872 kB' 'Shmem: 1969332 kB' 'KernelStack: 6024 kB' 'PageTables: 3424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79508 kB' 'Slab: 252656 kB' 'SReclaimable: 79508 kB' 'SUnreclaim: 173148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:01.949 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace loop: [[ $var == HugePages_Surp ]] fails and continues for the same node1 fields, MemTotal through HugePages_Free]
00:04:01.950 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
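Both per-node surplus queries returned 0, so the accounting at hugepages.sh@115-@117 leaves the expected split untouched before the order-insensitive comparison traced just below. A sketch of that accounting and comparison, using the get_meminfo sketch earlier and the values this run prints ("node0=512 expecting 513" / "node1=513 expecting 512"); which operand is the sysfs count and which is the expectation is inferred, not shown by the trace:

# Expected per-node counts vs. what sysfs reported, values from this run.
nodes_test=([0]=513 [1]=512)   # test's expected split of the 1025 pages
nodes_sys=([0]=512 [1]=513)    # counts read back from sysfs by get_nodes
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117, += 0
done
# @126-@130: use indexed arrays whose *indices* are the page counts, so
# listing the indices yields each set already sorted ascending.
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK   # "512 513" == "512 513"

This is why a 512/513 vs 513/512 swap between nodes still passes: only the multiset of counts is compared, not their node placement.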
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:01.951 node0=512 expecting 513
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:01.951 node1=513 expecting 512
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:01.951
00:04:01.951 real	0m1.452s
00:04:01.951 user	0m0.627s
00:04:01.951 sys	0m0.787s
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:01.951 07:30:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:01.951 ************************************
00:04:01.951 END TEST odd_alloc
00:04:01.951 ************************************
00:04:02.211 07:30:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:02.211 07:30:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:02.211 07:30:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:02.211 07:30:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.211 07:30:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.211 ************************************
00:04:02.211 START TEST custom_alloc
00:04:02.211 ************************************
00:04:02.211 07:30:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:02.211 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:02.211 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:02.211 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:02.211 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
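odd_alloc passes because the sorted sets match, and custom_alloc immediately derives its first page count. From the trace, get_test_nr_hugepages appears to divide the requested size in kB by the default hugepage size, 2048 kB on this box per the Hugepagesize field in the meminfo dump further down; a hedged sketch of that conversion (the division itself is inferred from the inputs and outputs, 1048576 -> 512 here and 2097152 -> 1024 just below):

default_hugepages=2048                        # kB, Hugepagesize on this machine
for size in 1048576 2097152; do               # the two requests in this test
    (( size >= default_hugepages )) || continue               # the @55 guard
    echo "$size kB -> $(( size / default_hugepages )) pages"  # 512, then 1024
done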
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
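The @81-@84 quartet above (two iterations assigning 256, with ": 256"/": 1" then ": 0"/": 0" side effects) is consistent with an even-split loop that fills nodes from the highest index down. A plausible reconstruction, not the verbatim SPDK source; the two `:` statements are assumed to be the arithmetic side effects whose values the trace echoes:

_nr_hugepages=512
_no_nodes=2
nodes_test=()
while (( _no_nodes > 0 )); do                                      # @81
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))     # @82 -> 256
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))            # @83 -> 256, 0
    : $(( --_no_nodes ))                                           # @84 -> 1, 0
done
echo "${nodes_test[@]}"   # 256 256

Fed the odd 1025-page request from the previous test, the same loop would yield 512 on node1 and 513 on node0, matching the "expecting" lines above.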
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.212 07:30:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:03.152 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:03.153 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:03.153 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:03.153 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:03.153 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:03.153 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:03.153 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:03.153 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:03.153 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:03.153 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:03.153 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:03.153 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:03.153 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:03.153 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:03.153 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:03.153 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:03.153 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44520652 kB' 'MemAvailable: 48025004 kB' 'Buffers: 2704 kB' 'Cached: 10496620 kB' 'SwapCached: 0 kB' 'Active: 7503956 kB' 'Inactive: 3506596 kB' 'Active(anon): 7109364 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514428 kB' 'Mapped: 204196 kB' 'Shmem: 6598136 kB' 'KReclaimable: 194008 kB' 'Slab: 560332 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366324 kB' 'KernelStack: 12784 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8215048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
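The dump above is the global /proc/meminfo view after setup.sh ran with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and its hugepage totals can be cross-checked by hand from values it reports itself:

# HugePages_Total should be the sum of the two HUGENODE requests, and
# Hugetlb should be that total times Hugepagesize (2048 kB here).
nr_hugepages=$(( 512 + 1024 ))          # nodes_hp[0] + nodes_hp[1] = 1536
hugetlb_kb=$(( nr_hugepages * 2048 ))   # 1536 * 2048 kB = 3145728 kB
echo "HugePages_Total=$nr_hugepages Hugetlb=${hugetlb_kb} kB"

Both computed values match the HugePages_Total and Hugetlb fields in the dump, so the kernel pinned exactly the requested pages before verify_nr_hugepages starts its per-field checks.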
00:04:03.153 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace loop: [[ $var == AnonHugePages ]] fails and continues for fields MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable]
00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.432 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.433 07:30:54 
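For readers decoding this xtrace: setup/common.sh's get_meminfo reads /proc/meminfo (or a per-NUMA-node meminfo file) into an array and scans it key by key, which is what produces the long read/compare/continue runs above. A minimal sketch, reconstructed from the trace itself; the exact upstream source may differ, and the control-flow structure is approximated:

    shopt -s extglob  # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}  # key to look up, optional NUMA node
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's counters instead (trace lines @23/@25).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every key with "Node <n> "; strip it (trace line @29).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits "AnonHugePages: 0 kB" into var=AnonHugePages, val=0 (unit lands in _).
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages  # prints 0 on this host, matching anon=0 above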
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.433 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44525436 kB' 'MemAvailable: 48029788 kB' 'Buffers: 2704 kB' 'Cached: 10496624 kB' 'SwapCached: 0 kB' 'Active: 7503276 kB' 'Inactive: 3506596 kB' 'Active(anon): 7108684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513676 kB' 'Mapped: 204084 kB' 'Shmem: 6598140 kB' 'KReclaimable: 194008 kB' 'Slab: 560292 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366284 kB' 'KernelStack: 12752 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8215052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[... repeated xtrace entries elided: setup/common.sh@31-32 skip every /proc/meminfo key that does not match HugePages_Surp ...]
00:04:03.434 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.434 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.434 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.434 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
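Each of these get_meminfo calls resolves to a single-field read of /proc/meminfo. For anyone spot-checking a host by hand, an awk one-liner (a convenience shown here for illustration, not part of the test scripts) yields the same answer:

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this host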
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514064 kB' 'Mapped: 204164 kB' 'Shmem: 6598156 kB' 'KReclaimable: 194008 kB' 'Slab: 560332 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366324 kB' 'KernelStack: 12768 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8215072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.435 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[xtrace condensed: the IFS=': ' read loop tests each remaining /proc/meminfo key (ShmemPmdMapped through HugePages_Free) against HugePages_Rsvd and skips every non-match with "continue"]
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:03.436 nr_hugepages=1536
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.436 resv_hugepages=0
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.436 surplus_hugepages=0
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.436 anon_hugepages=0
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
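The helper being traced here is get_meminfo from setup/common.sh. Pieced together from the xtrace records above, its core pattern is roughly the following; treat it as a sketch built from the trace's own idioms (the extglob prefix strip, the IFS=': ' read loop), not the verbatim SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob                      # needed for the +([0-9]) pattern below

  get_meminfo() {                       # usage: get_meminfo <field> [numa-node]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem
      # Per-node counters live in sysfs; each line there carries a "Node <n> " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # strip the sysfs prefix so keys line up
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"               # numeric value only; the "kB" unit lands in $_
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total           # system-wide: 1536 in this run
  get_meminfo HugePages_Rsvd            # 0, the value echoed above

The long printf '%s\n' record that follows is exactly this process substitution feeding the read loop with the snapshot of the file.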
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.436 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.437 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.437 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.437 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.437 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44525320 kB' 'MemAvailable: 48029672 kB' 'Buffers: 2704 kB' 'Cached: 10496660 kB' 'SwapCached: 0 kB' 'Active: 7503460 kB' 'Inactive: 3506596 kB' 'Active(anon): 7108868 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513872 kB' 'Mapped: 204164 kB' 'Shmem: 6598176 kB' 'KReclaimable: 194008 kB' 'Slab: 560332 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366324 kB' 'KernelStack: 12752 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8215092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the read loop walks every key of the snapshot above, skipping each with "continue", until it reaches HugePages_Total]
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
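get_nodes, traced just above, discovers the NUMA topology with a sysfs glob and keys an array by each node directory's numeric suffix. A standalone sketch of that idiom follows; the nr_hugepages path is an assumption for illustration, since the exact file the script reads is outside this slice of the trace:

  shopt -s extglob nullglob
  declare -a nodes_sys=()

  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} peels everything through the last "node", leaving the id.
      # Assumed source of the 512/1024 values seen above: the per-node 2 MiB
      # hugepage counter under the standard sysfs location.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done

  echo "no_nodes=${#nodes_sys[@]}"      # -> no_nodes=2 on this machine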
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.438 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22486840 kB' 'MemUsed: 10390100 kB' 'SwapCached: 0 kB' 'Active: 5100104 kB' 'Inactive: 3264144 kB' 'Active(anon): 4911532 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081576 kB' 'Mapped: 70128 kB' 'AnonPages: 285880 kB' 'Shmem: 4628860 kB' 'KernelStack: 6808 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114500 kB' 'Slab: 307716 kB' 'SReclaimable: 114500 kB' 'SUnreclaim: 193216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the read loop walks every key of the node0 snapshot above, skipping each with "continue", until it reaches HugePages_Surp]
00:04:03.439 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.439 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.439 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
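Node 0 is now settled: 512 hugepages, zero surplus. The arithmetic the test accumulates across these per-node passes reduces to a small check, shown here with this run's values (a condensation of hugepages.sh@115-117 and the earlier @107 test, not its literal code):

  declare -a nodes_test=([0]=512 [1]=1024)   # per-node totals read via get_meminfo
  surp=0 resv=0 total=0                      # both came back 0 in this run
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))         # fold reserved pages into the target
      (( nodes_test[node] += surp ))         # and any per-node surplus
      (( total += nodes_test[node] ))
  done
  (( total == 1536 )) && echo "512 + 1024 accounts for HugePages_Total"

The node 1 pass that follows fills in the second operand of that sum.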
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.440 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22038480 kB' 'MemUsed: 5626272 kB' 'SwapCached: 0 kB' 'Active: 2403524 kB' 'Inactive: 242452 kB' 'Active(anon): 2197504 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2417812 kB' 'Mapped: 134036 kB' 'AnonPages: 228188 kB' 'Shmem: 1969340 kB' 'KernelStack: 5960 kB' 'PageTables: 3276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79508 kB' 'Slab: 252616 kB' 'SReclaimable: 79508 kB' 'SUnreclaim: 173108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the read loop walks every key of the node1 snapshot above, skipping each with "continue", until it reaches HugePages_Surp]
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
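The sorted_t/sorted_s bookkeeping that follows relies on a bash idiom worth making explicit: writing arr[value]=1 uses the array index as a de-duplicating, auto-sorting set, because "${!arr[@]}" expands indices in ascending numeric order. A minimal demonstration with this run's values:

  declare -a sorted_t=()
  for v in 1024 512 512; do
      sorted_t[v]=1                 # index is the value; the duplicate 512 collapses
  done
  keys=("${!sorted_t[@]}")          # -> 512 1024, already sorted
  (IFS=,; echo "${keys[*]}")        # -> "512,1024", the string matched at hugepages.sh@130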
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.441 node0=512 expecting 512
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:03.441 node1=1024 expecting 1024
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:03.441 
00:04:03.441 real	0m1.331s
00:04:03.441 user	0m0.561s
00:04:03.441 sys	0m0.729s
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:03.441 07:30:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.441 ************************************
00:04:03.441 END TEST custom_alloc
00:04:03.441 ************************************
00:04:03.441 07:30:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:03.441 07:30:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:03.441 07:30:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.441 07:30:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.441 07:30:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.441 ************************************
00:04:03.441 START TEST no_shrink_alloc
00:04:03.441 ************************************
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
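The @49-57 lines above show the sizing arithmetic: get_test_nr_hugepages is called with a total size of 2097152 kB, and with the 2048 kB default hugepage size reported as Hugepagesize in the meminfo dumps below, that yields the nr_hugepages=1024 seen in the trace. A minimal sketch of that computation; the variable names and the explicit division are assumptions reconstructed from the traced values:

    size_kb=2097152                            # first argument to get_test_nr_hugepages
    hugepage_kb=2048                           # Hugepagesize on this machine
    nr_hugepages=$(( size_kb / hugepage_kb ))  # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages"          # matches the trace's nr_hugepages=1024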
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.441 07:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:04.825 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:04.825 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:04.825 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:04.825 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:04.825 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:04.825 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:04.825 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:04.825 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:04.825 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:04.825 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:04.825 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:04.825 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:04.825 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:04.825 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:04.825 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:04.825 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:04.825 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
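The get_meminfo call above is traced in full three times below (for AnonHugePages, HugePages_Surp and HugePages_Rsvd), and the pattern it follows is visible in the common.sh lines: read the whole meminfo file into an array, strip the "Node <n> " prefix that per-node files carry, then split each line on ': ' and print the value whose key matches. A minimal runnable sketch of the same pattern; the function name is mine, and extglob is needed for the prefix strip:

    shopt -s extglob
    get_meminfo_sketch() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      # With a node argument, prefer the per-node meminfo file if it exists;
      # in the trace below node is empty, so the check fails and /proc/meminfo is used.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"                  # one array element per meminfo line
      mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix lines with "Node N "
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB" into key/value
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }
    get_meminfo_sketch HugePages_Total           # prints 1024 on this machine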
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.825 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45560452 kB' 'MemAvailable: 49064804 kB' 'Buffers: 2704 kB' 'Cached: 10496748 kB' 'SwapCached: 0 kB' 'Active: 7509412 kB' 'Inactive: 3506596 kB' 'Active(anon): 7114820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519752 kB' 'Mapped: 204668 kB' 'Shmem: 6598264 kB' 'KReclaimable: 194008 kB' 'Slab: 560052 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366044 kB' 'KernelStack: 12752 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8221588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: 00:04:04.825-826 setup/common.sh@31-32 checks each meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and continues past each non-match]
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.826 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45567472 kB' 'MemAvailable: 49071824 kB' 'Buffers: 2704 kB' 'Cached: 10496752 kB' 'SwapCached: 0 kB' 'Active: 7509584 kB' 'Inactive: 3506596 kB' 'Active(anon): 7114992 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520000 kB' 'Mapped: 205052 kB' 'Shmem: 6598268 kB' 'KReclaimable: 194008 kB' 'Slab: 560024 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366016 kB' 'KernelStack: 12816 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8221604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
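A side note on the odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p comparisons in this trace: setup/common.sh compares with a quoted right-hand side, which forces a literal string match rather than a glob, and bash's xtrace renders a quoted [[ ]] pattern by backslash-escaping every character. A minimal sketch reproducing that rendering, assuming only stock bash:

    get=HugePages_Surp var=HugePages_Total
    set -x
    [[ $var == "$get" ]] || echo 'no match, scan continues'
    # xtrace prints: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x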
[xtrace condensed: 00:04:04.826-827 setup/common.sh@31-32 checks each meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp and continues past each non-match]
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45564252 kB' 'MemAvailable: 49068604 kB' 'Buffers: 2704 kB' 'Cached: 10496772 kB' 'SwapCached: 0 kB' 'Active: 7506148 kB' 'Inactive: 3506596 kB' 'Active(anon): 7111556 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516540 kB' 'Mapped: 204616 kB' 'Shmem: 6598288 kB' 'KReclaimable: 194008 kB' 'Slab: 560080 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366072 kB' 'KernelStack: 12768 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8218448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
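The three values verify_nr_hugepages collects have distinct meanings in /proc/meminfo: AnonHugePages counts transparent-hugepage memory in kB, HugePages_Surp counts pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in. A small sketch of reading them together, reusing the hypothetical get_meminfo_sketch helper from above:

    anon=$(get_meminfo_sketch AnonHugePages)    # THP usage in kB; 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)   # overcommitted pages; 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved but unfaulted pages; 0
    total=$(get_meminfo_sketch HugePages_Total) # 1024
    free=$(get_meminfo_sketch HugePages_Free)   # 1024
    echo "hugepages: $free free of $total (rsvd=$resv surp=$surp), THP=${anon} kB"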
[xtrace condensed: 00:04:04.827 setup/common.sh@31-32 scans meminfo keys (MemTotal through Slab) against HugePages_Rsvd, continuing past each non-match]
IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
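The scan condensed above is common.sh's get_meminfo helper: it walks the captured meminfo text with an IFS=': ' read loop and echoes the value of the first field whose name matches the request. A minimal standalone sketch of the same pattern, assuming plain bash and a hypothetical function name (this is not the SPDK helper itself, which also handles per-node files):

    #!/usr/bin/env bash
    # Sketch: print the value of one /proc/meminfo field, the way the
    # traced loop does it -- split on ': ' and stop at the first match.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1    # field not present
    }

    get_meminfo_field HugePages_Rsvd    # prints 0 on this host

The throwaway third variable swallows the trailing kB unit, which is why the helper can return bare numbers such as the 0 echoed next.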
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:04.828 nr_hugepages=1024
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.828 resv_hugepages=0
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.828 surplus_hugepages=0
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.828 anon_hugepages=0
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45560544 kB' 'MemAvailable: 49064896 kB' 'Buffers: 2704 kB' 'Cached: 10496792 kB' 'SwapCached: 0 kB' 'Active: 7509176 kB' 'Inactive: 3506596 kB' 'Active(anon): 7114584 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519520 kB' 'Mapped: 204616 kB' 'Shmem: 6598308 kB' 'KReclaimable: 194008 kB' 'Slab: 560080 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366072 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8221648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:04:04.828 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # IFS=': ' read -r / continue scan over each meminfo field (MemTotal through Unaccepted) until HugePages_Total matches
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
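The get_nodes trace that follows globs /sys/devices/system/node/node+([0-9]) (an extglob) and records one hugepage count per NUMA node: 1024 on node0, 0 on node1, hence no_nodes=2. A rough standalone equivalent that reads the per-node 2048 kB counters straight from sysfs (a sketch of the idea, not the script's exact loop, which derives the counts differently):

    #!/usr/bin/env bash
    # Sketch: collect the current 2048 kB hugepage count of every node.
    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}    # 2 on this machine
    (( no_nodes > 0 )) || exit 1
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[n]} hugepages"
    done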
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21408100 kB' 'MemUsed: 11468840 kB' 'SwapCached: 0 kB' 'Active: 5099864 kB' 'Inactive: 3264144 kB' 'Active(anon): 4911292 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081696 kB' 'Mapped: 70188 kB' 'AnonPages: 285484 kB' 'Shmem: 4628980 kB' 'KernelStack: 6776 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114500 kB' 'Slab: 307604 kB' 'SReclaimable: 114500 kB' 'SUnreclaim: 193104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:04.829 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # IFS=': ' read -r / continue scan over each node0 meminfo field (MemTotal through HugePages_Free) until HugePages_Surp matches
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:04.830 node0=1024 expecting 1024
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.830 07:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.767 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.767 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.767 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.767 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.767 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.767 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.767 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.767 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.767 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.767 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.767 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.767 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.767 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.767 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:06.032 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:06.032 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:06.032 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
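The INFO line below is scripts/setup.sh honoring CLEAR_HUGE=no: with NRHUGE=512 requested and 1024 pages already reserved on node0, it leaves the reservation alone, which is exactly the behavior this no_shrink_alloc test checks. A hypothetical reduction of that guard to a single node and page size (the real script iterates over nodes and page sizes and also handles clearing; writing nr_hugepages needs root):

    #!/usr/bin/env bash
    # Sketch: grow the node's hugepage pool if needed, never shrink it.
    NRHUGE=${NRHUGE:-512}
    nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(< "$nr")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$nr"
    fi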
00:04:06.032 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:06.032 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:06.032 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:06.032 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.032 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45559356 kB' 'MemAvailable: 49063708 kB' 'Buffers: 2704 kB' 'Cached: 10496860 kB' 'SwapCached: 0 kB' 'Active: 7503712 kB' 'Inactive: 3506596 kB' 'Active(anon): 7109120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514008 kB' 'Mapped: 204260 kB' 'Shmem: 6598376 kB' 'KReclaimable: 194008 kB' 'Slab: 560020 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 366012 kB' 'KernelStack: 12736 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8215712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:04:06.033 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (read/continue over every non-matching key: MemTotal ... HardwareCorrupted)
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
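The loop condensed above is the whole of get_meminfo: snapshot the file, strip any per-node "Node N " prefix, then scan key/value pairs until the requested key matches and echo its value. A condensed sketch of that pattern (the function name is ours; the mechanics mirror the common.sh trace):

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs when a node number is supplied.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the long run of "continue" above
            echo "$val"
            return 0
        done
        return 1
    }

Against the snapshot above, get_meminfo_sketch HugePages_Total would print 1024, and get_meminfo_sketch AnonHugePages prints 0, matching the anon=0 assignment in the trace.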
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45559356 kB' 'MemAvailable: 49063708 kB' 'Buffers: 2704 kB' 'Cached: 10496864 kB' 'SwapCached: 0 kB' 'Active: 7504148 kB' 'Inactive: 3506596 kB' 'Active(anon): 7109556 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514424 kB' 'Mapped: 204260 kB' 'Shmem: 6598380 kB' 'KReclaimable: 194008 kB' 'Slab: 560004 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 365996 kB' 'KernelStack: 12736 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8215728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:04:06.034 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (read/continue over every non-matching key: MemTotal ... HugePages_Rsvd)
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
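With anon and surp both 0 and HugePages_Rsvd about to be fetched the same way below, the verification reduces to arithmetic over these counters. A rough sketch of that bookkeeping, reusing the helper sketched earlier; the expected-count formula is an assumption for illustration, not reproduced from hugepages.sh:

    anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in this run
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    # Surplus pages are transient overflow, so discount them before comparing
    # against the per-node expectation (assumed formula for illustration).
    echo "node0=$(( total - surp )) expecting 1024"  # -> node0=1024 expecting 1024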
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45559980 kB' 'MemAvailable: 49064332 kB' 'Buffers: 2704 kB' 'Cached: 10496868 kB' 'SwapCached: 0 kB' 'Active: 7504460 kB' 'Inactive: 3506596 kB' 'Active(anon): 7109868 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514784 kB' 'Mapped: 204260 kB' 'Shmem: 6598384 kB' 'KReclaimable: 194008 kB' 'Slab: 560004 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 365996 kB' 'KernelStack: 12768 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8215752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB'
00:04:06.036 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (read/continue over every non-matching key: MemTotal ... CmaTotal; the captured trace breaks off at the next check)
00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d
]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.038 nr_hugepages=1024 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.038 resv_hugepages=0 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.038 surplus_hugepages=0 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.038 anon_hugepages=0 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
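The scans above are the repo's get_meminfo helper at work: each /proc/meminfo line is split on ': ' into a key/value pair, non-matching keys fall through with continue, and the first match is echoed back (HugePages_Rsvd resolves to 0 here, so resv=0). A minimal sketch of that lookup follows, condensed to sed+awk instead of the script's pure-bash mapfile/read loop; the function name and the per-node file check mirror the trace, the condensed body is an assumption:

  # Hedged sketch of the lookup pattern traced above; not the verbatim helper.
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # With a node argument, prefer the per-node view (as @23/@24 do above).
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      # Per-node files prefix each line with "Node <n> "; strip that, then
      # split on the colon and print the numeric field for the requested key.
      sed -E 's/^Node [0-9]+ +//' "$mem_f" |
          awk -F': +' -v key="$get" '$1 == key { print $2+0; exit }'
  }
  # e.g. get_meminfo HugePages_Rsvd   -> 0 on this box
  #      get_meminfo HugePages_Surp 0 -> 0 on node0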
00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45560352 kB' 'MemAvailable: 49064704 kB' 'Buffers: 2704 kB' 'Cached: 10496904 kB' 'SwapCached: 0 kB' 'Active: 7504112 kB' 'Inactive: 3506596 kB' 'Active(anon): 7109520 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514336 kB' 'Mapped: 204184 kB' 'Shmem: 6598420 kB' 'KReclaimable: 194008 kB' 'Slab: 559996 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 365988 kB' 'KernelStack: 12784 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8215772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 13889536 kB' 'DirectMap1G: 53477376 kB' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.038 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.039 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21405860 kB' 'MemUsed: 11471080 kB' 'SwapCached: 0 kB' 'Active: 5100036 kB' 'Inactive: 3264144 kB' 'Active(anon): 4911464 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081800 kB' 'Mapped: 70148 kB' 'AnonPages: 285564 kB' 'Shmem: 4629084 kB' 'KernelStack: 6792 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114500 kB' 'Slab: 307604 kB' 'SReclaimable: 114500 kB' 'SUnreclaim: 193104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.040 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 07:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.301 node0=1024 expecting 1024 00:04:06.301 07:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.301 00:04:06.301 real 0m2.707s 00:04:06.301 user 0m1.178s 00:04:06.301 sys 0m1.449s 00:04:06.302 07:30:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.302 07:30:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.302 ************************************ 00:04:06.302 END TEST no_shrink_alloc 
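Before the per-node checks above, get_nodes (hugepages.sh@27-33 in the trace) enumerates NUMA nodes by globbing sysfs and records each node's preallocated 2 MiB pool; the reserved/surplus counts are then folded into the expected per-node totals, giving the 'node0=1024 expecting 1024' line. A sketch of that enumeration, under the same assumptions as this run (two nodes, 2048 kB pages, and a plain glob in place of the script's extglob pattern):

  shopt -s nullglob
  declare -A nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      # Key by node index; value is that node's preallocated 2 MiB pool.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || exit 1
  # On this machine: nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2.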
00:04:06.302 ************************************ 00:04:06.302 07:30:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.302 07:30:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.302 00:04:06.302 real 0m11.040s 00:04:06.302 user 0m4.322s 00:04:06.302 sys 0m5.616s 00:04:06.302 07:30:57 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.302 07:30:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.302 ************************************ 00:04:06.302 END TEST hugepages 00:04:06.302 ************************************ 00:04:06.302 07:30:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.302 07:30:57 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.302 07:30:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.302 07:30:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.302 07:30:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.302 ************************************ 00:04:06.302 START TEST driver 00:04:06.302 ************************************ 00:04:06.302 07:30:57 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.302 * Looking for test storage... 
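The clear_hp teardown traced above (hugepages.sh@37-45) zeroes every per-node hugepage pool and exports CLEAR_HUGE for later stages. The loop reduces to the sketch below; the sysfs layout is the standard kernel one, but which hugepage sizes appear under each node varies by kernel and platform, and the writes need root:

  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          # Release this node's pool for every supported hugepage size.
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes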
00:04:06.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.302 07:30:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:06.302 07:30:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.302 07:30:57 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.836 07:30:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:08.836 07:30:59 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.836 07:30:59 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.836 07:30:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.836 ************************************ 00:04:08.836 START TEST guess_driver 00:04:08.836 ************************************ 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:08.836 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:08.836 Looking for driver=vfio-pci 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.836 07:30:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.216 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.217 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.217 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.217 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.217 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.217 07:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.156 07:31:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.712 00:04:13.712 real 0m4.732s 00:04:13.712 user 0m1.032s 00:04:13.712 sys 0m1.758s 00:04:13.712 07:31:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.712 07:31:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.712 ************************************ 00:04:13.712 END TEST guess_driver 00:04:13.712 ************************************ 00:04:13.712 07:31:04 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:13.712 00:04:13.712 real 0m7.387s 00:04:13.712 user 0m1.654s 00:04:13.712 sys 0m2.813s 00:04:13.712 07:31:04 
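The guess_driver pass that just completed reduces to a small decision: vfio-pci is chosen when the kernel exposes usable IOMMU groups (141 on this node) or the unsafe no-IOMMU knob is enabled, and modprobe can resolve the module to real .ko files; otherwise the suite would fall back to uio_pci_generic. A minimal sketch of that logic, with the fallback branch assumed (it was not exercised in this run):

#!/usr/bin/env bash
# Re-creation of the pick_driver decision traced above. The vfio-pci
# branch mirrors the log; the uio_pci_generic fallback is an assumption.
shopt -s nullglob   # so an empty iommu_groups directory really counts as zero

is_driver() {
    # A module is usable if modprobe can resolve it to concrete .ko files.
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

pick_driver() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci needs populated IOMMU groups or the unsafe no-IOMMU knob.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
        is_driver vfio_pci && { echo vfio-pci; return 0; }
    fi
    echo uio_pci_generic
}

pick_driver   # prints vfio-pci on this node (141 IOMMU groups, unsafe_vfio=N)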
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.712 07:31:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.712 ************************************ 00:04:13.712 END TEST driver 00:04:13.712 ************************************ 00:04:13.712 07:31:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.712 07:31:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.712 07:31:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.712 07:31:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.712 07:31:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.712 ************************************ 00:04:13.712 START TEST devices 00:04:13.712 ************************************ 00:04:13.712 07:31:04 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.712 * Looking for test storage... 00:04:13.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.712 07:31:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.712 07:31:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.712 07:31:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.712 07:31:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.089 07:31:06 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:15.089 07:31:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:15.089 07:31:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:15.089 
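Before any mount tests run, each /sys/block/nvme* device has to pass the gates traced here: not zoned, not already holding a partition table, and at least min_disk_size bytes. A compact restatement follows; usable_disk is an invented helper name, the SPDK-owned-GPT check via spdk-gpt.py is omitted, and the 512-byte-sector arithmetic is assumed from sec_size_to_bytes:

usable_disk() {
    local dev=$1 min_disk_size=3221225472                          # 3 GiB floor, as declared in the trace
    [[ $(< "/sys/block/$dev/queue/zoned") == none ]] || return 1   # zoned devices are excluded
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && return 1   # skip disks already partitioned
    (( $(< "/sys/block/$dev/size") * 512 >= min_disk_size ))       # the size file counts 512 B sectors
}
usable_disk nvme0n1 && echo "test disk: nvme0n1"                   # 1000204886016 B here, so it passes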
07:31:06 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.089 No valid GPT data, bailing 00:04:15.090 07:31:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.090 07:31:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.090 07:31:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.090 07:31:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.090 07:31:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.090 07:31:06 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:15.090 07:31:06 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:15.090 07:31:06 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.090 07:31:06 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.090 07:31:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.090 ************************************ 00:04:15.090 START TEST nvme_mount 00:04:15.090 ************************************ 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.090 07:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:16.487 Creating new GPT entries in memory. 00:04:16.487 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.487 other utilities. 00:04:16.487 07:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.487 07:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.487 07:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.487 07:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.487 07:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:17.437 Creating new GPT entries in memory. 00:04:17.437 The operation has completed successfully. 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 924390 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.437 07:31:08 
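The nvme_mount body above compresses to four destructive commands plus a mount. Paths below are placeholders for the workspace locations in the trace; note the sgdisk bounds, where sectors 2048 through 2099199 are 2097152 sectors of 512 B, exactly 1 GiB:

#!/usr/bin/env bash
# Condensed partition/format/mount sequence from the trace above.
# WARNING: destructive - this wipes the partition table of $DISK.
set -e
DISK=/dev/nvme0n1   # placeholder; the run above targets this device
MNT=/tmp/nvme_mount # placeholder for test/setup/nvme_mount in the workspace

sgdisk "$DISK" --zap-all                 # destroy existing GPT/MBR structures
sgdisk "$DISK" --new=1:2048:2099199      # one 1 GiB partition
mkfs.ext4 -qF "${DISK}p1"                # quiet, forced ext4 format
mkdir -p "$MNT"
mount "${DISK}p1" "$MNT"                 # mount point used for the test-file checks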
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.437 07:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.372 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.632 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.632 07:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.891 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.891 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.891 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.891 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- 
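The wipefs output above is the cleanup_nvme teardown between sub-tests: unmount if the directory is still a mountpoint, then clear the ext4 superblock magic on the partition and the GPT headers plus protective MBR on the whole disk, at exactly the byte offsets the log reports. A sketch with placeholder paths:

cleanup_nvme() {
    local mnt=$1 disk=$2
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 magic 53 ef at offset 0x438
    [[ -b $disk ]] && wipefs --all "$disk"           # GPT headers + PMBR 55 aa, then re-read the table
}
cleanup_nvme /tmp/nvme_mount /dev/nvme0n1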
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.891 07:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:20.301 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.302 07:31:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.237 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.497 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.498 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.498 00:04:21.498 real 0m6.255s 00:04:21.498 user 0m1.424s 00:04:21.498 sys 0m2.392s 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.498 07:31:12 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.498 ************************************ 00:04:21.498 END TEST nvme_mount 00:04:21.498 ************************************ 00:04:21.498 07:31:12 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:21.498 07:31:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:21.498 07:31:12 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.498 07:31:12 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.498 07:31:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.498 ************************************ 00:04:21.498 START TEST dm_mount 00:04:21.498 ************************************ 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.498 07:31:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:22.436 Creating new GPT entries in memory. 00:04:22.436 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.436 other utilities. 00:04:22.436 07:31:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.436 07:31:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.436 07:31:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:22.436 07:31:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.436 07:31:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:23.816 Creating new GPT entries in memory. 00:04:23.816 The operation has completed successfully. 00:04:23.816 07:31:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.816 07:31:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.816 07:31:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.816 07:31:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.816 07:31:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:24.756 The operation has completed successfully. 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 926775 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- 
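For dm_mount the same 1 GiB partitioning is done twice (sectors 2048-2099199 and 2099200-4196351) and the pieces are joined into one linear device-mapper target named nvme_dm_test. The mapping table is fed to dmsetup on stdin and is not echoed in the trace, so the one below is a plausible reconstruction rather than a verbatim copy (2097152 sectors per partition, from the sgdisk bounds):

#!/usr/bin/env bash
# Sketch of the dm_mount device construction; table values assumed.
set -e
DISK=/dev/nvme0n1
sgdisk "$DISK" --zap-all
sgdisk "$DISK" --new=1:2048:2099199      # partition 1: 1 GiB
sgdisk "$DISK" --new=2:2099200:4196351   # partition 2: 1 GiB
dmsetup create nvme_dm_test <<TABLE
0 2097152 linear ${DISK}p1 0
2097152 2097152 linear ${DISK}p2 0
TABLE
readlink -f /dev/mapper/nvme_dm_test     # resolves to /dev/dm-0 in the run above
ls /sys/class/block/nvme0n1p1/holders    # dm-0: the partitions are now held by the dm device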
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.756 07:31:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.693 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.694 07:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.953 07:31:17 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.953 07:31:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.887 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.888 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:27.148 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:27.148 00:04:27.148 real 0m5.749s 00:04:27.148 user 0m0.979s 00:04:27.148 sys 0m1.636s 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.148 07:31:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:27.148 ************************************ 00:04:27.148 END TEST dm_mount 00:04:27.148 ************************************ 00:04:27.148 07:31:18 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.148 07:31:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.406 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:27.406 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:27.406 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.406 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.406 07:31:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:27.406 00:04:27.406 real 0m13.841s 00:04:27.406 user 0m2.997s 00:04:27.406 sys 0m5.036s 00:04:27.406 07:31:18 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.406 07:31:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.406 ************************************ 00:04:27.406 END TEST devices 00:04:27.406 ************************************ 00:04:27.665 07:31:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:27.665 00:04:27.665 real 0m42.893s 00:04:27.665 user 0m12.298s 00:04:27.665 sys 0m18.783s 00:04:27.665 07:31:18 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.665 07:31:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.665 ************************************ 00:04:27.665 END TEST setup.sh 00:04:27.665 ************************************ 00:04:27.665 07:31:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.665 07:31:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:28.602 Hugepages 00:04:28.602 node hugesize free / total 00:04:28.602 node0 1048576kB 0 / 0 00:04:28.602 node0 2048kB 2048 / 2048 00:04:28.602 node1 1048576kB 0 / 0 00:04:28.602 node1 2048kB 0 / 0 00:04:28.602 00:04:28.602 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.602 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:28.602 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:28.602 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:28.602 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:28.861 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:28.862 07:31:19 -- spdk/autotest.sh@130 -- # uname -s 00:04:28.862 07:31:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:28.862 07:31:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:28.862 07:31:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.796 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.796 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.796 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:30.055 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.055 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.055 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.055 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:30.055 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:30.055 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:31.027 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.027 07:31:22 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:31.965 07:31:23 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:31.965 07:31:23 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:31.965 07:31:23 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.965 07:31:23 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:31.965 07:31:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:31.965 07:31:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:31.965 07:31:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.965 07:31:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.965 07:31:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:32.222 07:31:23 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:32.222 07:31:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:32.222 07:31:23 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.157 Waiting for block devices as requested 00:04:33.157 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:33.415 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:33.415 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:33.415 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:33.674 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:33.674 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:33.674 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:33.674 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:33.933 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:33.933 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:33.933 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:33.933 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:34.193 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:34.193 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:34.193 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:34.193 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:34.452 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:34.452 07:31:25 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.452 07:31:25 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:34.452 07:31:25 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:34.452 07:31:25 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:34.452 07:31:25 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.452 07:31:25 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.452 07:31:25 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:34.452 07:31:25 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.452 07:31:25 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.452 07:31:25 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.452 07:31:25 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:34.452 07:31:25 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.452 07:31:25 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.452 07:31:25 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.452 07:31:25 -- common/autotest_common.sh@1557 -- # continue 00:04:34.452 07:31:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:34.452 07:31:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.452 07:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:34.452 07:31:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:34.452 07:31:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.452 07:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:34.452 07:31:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.861 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.861 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
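Note: the pre_cleanup pass above resolves the NVMe character device behind a PCI BDF by walking sysfs, then parses 'nvme id-ctrl' output to decide whether the controller supports namespace management (OACS bit 3). A minimal standalone sketch of the same walk, assuming nvme-cli is installed; the BDF below is this host's controller and would need adjusting elsewhere:

    #!/usr/bin/env bash
    bdf=0000:88:00.0   # assumption: adjust to the controller under test
    # Map the BDF to its controller node, e.g. .../0000:88:00.0/nvme/nvme0 -> /dev/nvme0
    ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$ctrlr_path")
    # OACS bit 3 (0x8) advertises Namespace Management, mirroring the oacs_ns_manage=8 check above.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    echo "controller=$ctrlr oacs=$oacs ns_manage_bit=$(( oacs & 0x8 ))"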
00:04:35.861 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.861 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.798 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:36.799 07:31:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:36.799 07:31:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.799 07:31:27 -- common/autotest_common.sh@10 -- # set +x 00:04:36.799 07:31:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:36.799 07:31:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:36.799 07:31:27 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.799 07:31:27 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:36.799 07:31:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:36.799 07:31:27 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:36.799 07:31:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:36.799 07:31:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:36.799 07:31:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.799 07:31:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:36.799 07:31:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:36.799 07:31:27 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:36.799 07:31:27 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:36.799 07:31:27 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.799 07:31:27 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:36.799 07:31:27 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:36.799 07:31:27 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:36.799 07:31:27 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:36.799 07:31:27 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:36.799 07:31:27 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:36.799 07:31:27 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=931952 00:04:36.799 07:31:27 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.799 07:31:27 -- common/autotest_common.sh@1598 -- # waitforlisten 931952 00:04:36.799 07:31:27 -- common/autotest_common.sh@829 -- # '[' -z 931952 ']' 00:04:36.799 07:31:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.799 07:31:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.799 07:31:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.799 07:31:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.799 07:31:27 -- common/autotest_common.sh@10 -- # set +x 00:04:37.057 [2024-07-15 07:31:28.062358] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:37.057 [2024-07-15 07:31:28.062530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931952 ] 00:04:37.057 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.057 [2024-07-15 07:31:28.194939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.316 [2024-07-15 07:31:28.449811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.249 07:31:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.249 07:31:29 -- common/autotest_common.sh@862 -- # return 0 00:04:38.249 07:31:29 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:38.249 07:31:29 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:38.249 07:31:29 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:41.541 nvme0n1 00:04:41.541 07:31:32 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:41.541 [2024-07-15 07:31:32.704456] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:41.541 [2024-07-15 07:31:32.704534] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:41.541 request: 00:04:41.541 { 00:04:41.541 "nvme_ctrlr_name": "nvme0", 00:04:41.541 "password": "test", 00:04:41.541 "method": "bdev_nvme_opal_revert", 00:04:41.541 "req_id": 1 00:04:41.541 } 00:04:41.541 Got JSON-RPC error response 00:04:41.541 response: 00:04:41.541 { 00:04:41.541 "code": -32603, 00:04:41.541 "message": "Internal error" 00:04:41.541 } 00:04:41.541 07:31:32 -- common/autotest_common.sh@1604 -- # true 00:04:41.541 07:31:32 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:41.541 07:31:32 -- common/autotest_common.sh@1608 -- # killprocess 931952 00:04:41.541 07:31:32 -- common/autotest_common.sh@948 -- # '[' -z 931952 ']' 00:04:41.541 07:31:32 -- common/autotest_common.sh@952 -- # kill -0 931952 00:04:41.541 07:31:32 -- common/autotest_common.sh@953 -- # uname 00:04:41.541 07:31:32 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.541 07:31:32 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 931952 00:04:41.541 07:31:32 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.541 07:31:32 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.541 07:31:32 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 931952' 00:04:41.541 killing process with pid 931952 00:04:41.541 07:31:32 -- common/autotest_common.sh@967 -- # kill 931952 00:04:41.541 07:31:32 -- common/autotest_common.sh@972 -- # wait 931952 00:04:45.747 07:31:36 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:45.747 07:31:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:45.747 07:31:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.747 07:31:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.747 07:31:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:45.747 07:31:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.747 07:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:45.747 07:31:36 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:45.747 07:31:36 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.747 07:31:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.747 07:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.747 07:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:45.747 ************************************ 00:04:45.747 START TEST env 00:04:45.747 ************************************ 00:04:45.747 07:31:36 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.747 * Looking for test storage... 00:04:45.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:45.747 07:31:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.747 07:31:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.747 07:31:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.747 07:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.747 ************************************ 00:04:45.747 START TEST env_memory 00:04:45.747 ************************************ 00:04:45.747 07:31:36 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.747 00:04:45.747 00:04:45.747 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.747 http://cunit.sourceforge.net/ 00:04:45.747 00:04:45.747 00:04:45.747 Suite: memory 00:04:45.747 Test: alloc and free memory map ...[2024-07-15 07:31:36.682706] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.747 passed 00:04:45.747 Test: mem map translation ...[2024-07-15 07:31:36.723530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:45.747 [2024-07-15 07:31:36.723571] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.747 [2024-07-15 07:31:36.723645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.747 [2024-07-15 07:31:36.723674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.747 passed 00:04:45.747 Test: mem map registration ...[2024-07-15 07:31:36.791401] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:45.747 [2024-07-15 07:31:36.791452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:45.747 passed 00:04:45.747 Test: mem map adjacent registrations ...passed 00:04:45.747 00:04:45.747 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.747 suites 1 1 n/a 0 0 00:04:45.747 tests 4 4 4 0 0 00:04:45.747 asserts 152 152 152 0 n/a 00:04:45.747 00:04:45.747 Elapsed time = 0.239 seconds 00:04:45.747 00:04:45.747 real 0m0.258s 00:04:45.747 user 0m0.239s 00:04:45.747 sys 0m0.018s 00:04:45.747 07:31:36 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.747 07:31:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:45.747 ************************************ 00:04:45.747 END TEST env_memory 00:04:45.747 ************************************ 00:04:45.747 07:31:36 env -- common/autotest_common.sh@1142 -- # return 0 00:04:45.747 07:31:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.747 07:31:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.747 07:31:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.747 07:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.747 ************************************ 00:04:45.747 START TEST env_vtophys 00:04:45.747 ************************************ 00:04:45.748 07:31:36 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.748 EAL: lib.eal log level changed from notice to debug 00:04:45.748 EAL: Detected lcore 0 as core 0 on socket 0 00:04:45.748 EAL: Detected lcore 1 as core 1 on socket 0 00:04:45.748 EAL: Detected lcore 2 as core 2 on socket 0 00:04:45.748 EAL: Detected lcore 3 as core 3 on socket 0 00:04:45.748 EAL: Detected lcore 4 as core 4 on socket 0 00:04:45.748 EAL: Detected lcore 5 as core 5 on socket 0 00:04:45.748 EAL: Detected lcore 6 as core 8 on socket 0 00:04:45.748 EAL: Detected lcore 7 as core 9 on socket 0 00:04:45.748 EAL: Detected lcore 8 as core 10 on socket 0 00:04:45.748 EAL: Detected lcore 9 as core 11 on socket 0 00:04:45.748 EAL: Detected lcore 10 as core 12 on socket 0 00:04:45.748 EAL: Detected lcore 11 as core 13 on socket 0 00:04:45.748 EAL: Detected lcore 12 as core 0 on socket 1 00:04:45.748 EAL: Detected lcore 13 as core 1 on socket 1 00:04:45.748 EAL: Detected lcore 14 as core 2 on socket 1 00:04:45.748 EAL: Detected lcore 15 as core 3 on socket 1 00:04:45.748 EAL: Detected lcore 16 as core 4 on socket 1 00:04:45.748 EAL: Detected lcore 17 as core 5 on socket 1 00:04:45.748 EAL: Detected lcore 18 as core 8 on socket 1 00:04:45.748 EAL: Detected lcore 19 as core 9 on socket 1 00:04:45.748 EAL: Detected lcore 20 as core 10 on socket 1 00:04:45.748 EAL: Detected lcore 21 as core 11 on socket 1 00:04:45.748 EAL: Detected lcore 22 as core 12 on socket 1 00:04:45.748 EAL: Detected lcore 23 as core 13 on socket 1 00:04:45.748 EAL: Detected lcore 24 as core 0 on socket 0 00:04:45.748 EAL: Detected lcore 25 as core 1 on socket 0 00:04:45.748 EAL: Detected lcore 26 as core 2 on socket 0 00:04:45.748 EAL: Detected lcore 27 as core 3 on socket 0 00:04:45.748 EAL: Detected lcore 28 as core 4 on socket 0 00:04:45.748 EAL: Detected lcore 29 as core 5 on socket 0 00:04:45.748 EAL: Detected lcore 30 as core 8 on socket 0 00:04:45.748 EAL: Detected lcore 31 as core 9 on socket 0 00:04:45.748 EAL: Detected lcore 32 as core 10 on socket 0 00:04:45.748 EAL: Detected lcore 33 as core 11 on socket 0 00:04:45.748 EAL: Detected lcore 34 as core 12 on socket 0 00:04:45.748 EAL: Detected lcore 35 as core 13 on socket 0 00:04:45.748 EAL: Detected lcore 36 as core 0 on socket 1 00:04:45.748 EAL: Detected lcore 37 as core 1 on socket 1 00:04:45.748 EAL: Detected lcore 38 as core 2 on socket 1 00:04:45.748 EAL: Detected lcore 39 as core 3 on socket 1 00:04:45.748 EAL: Detected lcore 40 as core 4 on socket 1 00:04:45.748 EAL: Detected lcore 41 as core 5 on socket 1 00:04:45.748 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:45.748 EAL: Detected lcore 43 as core 9 on socket 1 00:04:45.748 EAL: Detected lcore 44 as core 10 on socket 1 00:04:45.748 EAL: Detected lcore 45 as core 11 on socket 1 00:04:45.748 EAL: Detected lcore 46 as core 12 on socket 1 00:04:45.748 EAL: Detected lcore 47 as core 13 on socket 1 00:04:46.008 EAL: Maximum logical cores by configuration: 128 00:04:46.008 EAL: Detected CPU lcores: 48 00:04:46.008 EAL: Detected NUMA nodes: 2 00:04:46.008 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.008 EAL: Detected shared linkage of DPDK 00:04:46.008 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.008 EAL: Bus pci wants IOVA as 'DC' 00:04:46.008 EAL: Buses did not request a specific IOVA mode. 00:04:46.008 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:46.008 EAL: Selected IOVA mode 'VA' 00:04:46.008 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.008 EAL: Probing VFIO support... 00:04:46.009 EAL: IOMMU type 1 (Type 1) is supported 00:04:46.009 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:46.009 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:46.009 EAL: VFIO support initialized 00:04:46.009 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.009 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.009 EAL: Setting up physically contiguous memory... 00:04:46.009 EAL: Setting maximum number of open files to 524288 00:04:46.009 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.009 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:46.009 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.009 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:46.009 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:46.009 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.009 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:46.009 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.009 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.009 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:46.009 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:46.009 EAL: Hugepages will be freed exactly as allocated. 00:04:46.009 EAL: No shared files mode enabled, IPC is disabled 00:04:46.009 EAL: No shared files mode enabled, IPC is disabled 00:04:46.009 EAL: TSC frequency is ~2700000 KHz 00:04:46.009 EAL: Main lcore 0 is ready (tid=7f5157d19a40;cpuset=[0]) 00:04:46.009 EAL: Trying to obtain current memory policy. 00:04:46.009 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.009 EAL: Restoring previous memory policy: 0 00:04:46.009 EAL: request: mp_malloc_sync 00:04:46.009 EAL: No shared files mode enabled, IPC is disabled 00:04:46.009 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.009 EAL: No shared files mode enabled, IPC is disabled 00:04:46.009 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.009 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.009 00:04:46.009 00:04:46.009 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.009 http://cunit.sourceforge.net/ 00:04:46.009 00:04:46.009 00:04:46.009 Suite: components_suite 00:04:46.579 Test: vtophys_malloc_test ...passed 00:04:46.579 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:46.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.579 EAL: Restoring previous memory policy: 4 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was expanded by 4MB 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was shrunk by 4MB 00:04:46.579 EAL: Trying to obtain current memory policy. 
00:04:46.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.579 EAL: Restoring previous memory policy: 4 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was expanded by 6MB 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was shrunk by 6MB 00:04:46.579 EAL: Trying to obtain current memory policy. 00:04:46.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.579 EAL: Restoring previous memory policy: 4 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was expanded by 10MB 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was shrunk by 10MB 00:04:46.579 EAL: Trying to obtain current memory policy. 00:04:46.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.579 EAL: Restoring previous memory policy: 4 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was expanded by 18MB 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was shrunk by 18MB 00:04:46.579 EAL: Trying to obtain current memory policy. 00:04:46.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.579 EAL: Restoring previous memory policy: 4 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was expanded by 34MB 00:04:46.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.579 EAL: request: mp_malloc_sync 00:04:46.579 EAL: No shared files mode enabled, IPC is disabled 00:04:46.579 EAL: Heap on socket 0 was shrunk by 34MB 00:04:46.579 EAL: Trying to obtain current memory policy. 00:04:46.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.580 EAL: Restoring previous memory policy: 4 00:04:46.580 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.580 EAL: request: mp_malloc_sync 00:04:46.580 EAL: No shared files mode enabled, IPC is disabled 00:04:46.580 EAL: Heap on socket 0 was expanded by 66MB 00:04:46.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.839 EAL: request: mp_malloc_sync 00:04:46.839 EAL: No shared files mode enabled, IPC is disabled 00:04:46.839 EAL: Heap on socket 0 was shrunk by 66MB 00:04:46.839 EAL: Trying to obtain current memory policy. 
00:04:46.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.839 EAL: Restoring previous memory policy: 4 00:04:46.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.839 EAL: request: mp_malloc_sync 00:04:46.839 EAL: No shared files mode enabled, IPC is disabled 00:04:46.839 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.098 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.098 EAL: request: mp_malloc_sync 00:04:47.098 EAL: No shared files mode enabled, IPC is disabled 00:04:47.098 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.358 EAL: Trying to obtain current memory policy. 00:04:47.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.358 EAL: Restoring previous memory policy: 4 00:04:47.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.358 EAL: request: mp_malloc_sync 00:04:47.358 EAL: No shared files mode enabled, IPC is disabled 00:04:47.358 EAL: Heap on socket 0 was expanded by 258MB 00:04:47.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.927 EAL: request: mp_malloc_sync 00:04:47.927 EAL: No shared files mode enabled, IPC is disabled 00:04:47.927 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.495 EAL: Trying to obtain current memory policy. 00:04:48.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.495 EAL: Restoring previous memory policy: 4 00:04:48.495 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.495 EAL: request: mp_malloc_sync 00:04:48.495 EAL: No shared files mode enabled, IPC is disabled 00:04:48.495 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.432 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.691 EAL: request: mp_malloc_sync 00:04:49.691 EAL: No shared files mode enabled, IPC is disabled 00:04:49.691 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.259 EAL: Trying to obtain current memory policy. 
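Note: the expand/shrink pairs above come from vtophys_spdk_malloc_test; each round allocates and frees one buffer, and every heap change fires the registered 'spdk:(nil)' mem event callback so SPDK can keep its virtual-to-physical translation maps in step with the DPDK heap. The round sizes follow 2^k + 2 MB for k = 1..10 — 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with a final 1026 MB round below — and every expansion is matched by an equal shrink once the buffer is freed.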
00:04:50.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.826 EAL: Restoring previous memory policy: 4 00:04:50.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.826 EAL: request: mp_malloc_sync 00:04:50.826 EAL: No shared files mode enabled, IPC is disabled 00:04:50.826 EAL: Heap on socket 0 was expanded by 1026MB 00:04:52.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.734 EAL: request: mp_malloc_sync 00:04:52.734 EAL: No shared files mode enabled, IPC is disabled 00:04:52.734 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.677 passed 00:04:54.677 00:04:54.677 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.677 suites 1 1 n/a 0 0 00:04:54.677 tests 2 2 2 0 0 00:04:54.677 asserts 497 497 497 0 n/a 00:04:54.677 00:04:54.677 Elapsed time = 8.339 seconds 00:04:54.677 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.677 EAL: request: mp_malloc_sync 00:04:54.677 EAL: No shared files mode enabled, IPC is disabled 00:04:54.677 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.677 EAL: No shared files mode enabled, IPC is disabled 00:04:54.677 EAL: No shared files mode enabled, IPC is disabled 00:04:54.677 EAL: No shared files mode enabled, IPC is disabled 00:04:54.677 00:04:54.677 real 0m8.609s 00:04:54.677 user 0m7.517s 00:04:54.677 sys 0m1.029s 00:04:54.677 07:31:45 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.677 07:31:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.677 ************************************ 00:04:54.677 END TEST env_vtophys 00:04:54.677 ************************************ 00:04:54.677 07:31:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:54.677 07:31:45 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.677 07:31:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.677 07:31:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.677 07:31:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.677 ************************************ 00:04:54.677 START TEST env_pci 00:04:54.677 ************************************ 00:04:54.677 07:31:45 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.677 00:04:54.677 00:04:54.677 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.677 http://cunit.sourceforge.net/ 00:04:54.677 00:04:54.677 00:04:54.677 Suite: pci 00:04:54.677 Test: pci_hook ...[2024-07-15 07:31:45.624055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 934041 has claimed it 00:04:54.677 EAL: Cannot find device (10000:00:01.0) 00:04:54.677 EAL: Failed to attach device on primary process 00:04:54.677 passed 00:04:54.677 00:04:54.677 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.677 suites 1 1 n/a 0 0 00:04:54.677 tests 1 1 1 0 0 00:04:54.677 asserts 25 25 25 0 n/a 00:04:54.677 00:04:54.677 Elapsed time = 0.043 seconds 00:04:54.677 00:04:54.677 real 0m0.094s 00:04:54.677 user 0m0.040s 00:04:54.677 sys 0m0.054s 00:04:54.677 07:31:45 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.677 07:31:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.677 ************************************ 00:04:54.677 END TEST env_pci 00:04:54.677 ************************************ 
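Note: the pci_hook case passes because the second claim is rejected: process 934041 already holds the device, so attaching on the primary process must fail. Ownership is serialized through per-BDF lock files under /var/tmp, as the error text shows. A hedged sketch for spotting locks left behind by a crashed run, assuming the /var/tmp/spdk_pci_lock_<BDF> naming seen above and that fuser is available; this is a diagnostic convenience, not an SPDK tool:

    # List SPDK PCI lock files and whoever still holds them.
    for f in /var/tmp/spdk_pci_lock_*; do
        [ -e "$f" ] || continue
        holder=$(fuser "$f" 2>/dev/null)
        echo "${f##*_} locked by: ${holder:-nobody (possibly stale)}"
    done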
00:04:54.677 07:31:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:54.677 07:31:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.677 07:31:45 env -- env/env.sh@15 -- # uname 00:04:54.677 07:31:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.677 07:31:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.677 07:31:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.677 07:31:45 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:54.677 07:31:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.677 07:31:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.677 ************************************ 00:04:54.677 START TEST env_dpdk_post_init 00:04:54.677 ************************************ 00:04:54.677 07:31:45 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.677 EAL: Detected CPU lcores: 48 00:04:54.677 EAL: Detected NUMA nodes: 2 00:04:54.677 EAL: Detected shared linkage of DPDK 00:04:54.677 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.677 EAL: Selected IOVA mode 'VA' 00:04:54.677 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.677 EAL: VFIO support initialized 00:04:54.677 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.937 EAL: Using IOMMU type 1 (Type 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:54.937 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:55.877 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:59.162 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:59.162 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:59.162 Starting DPDK initialization... 00:04:59.162 Starting SPDK post initialization... 00:04:59.162 SPDK NVMe probe 00:04:59.162 Attaching to 0000:88:00.0 00:04:59.162 Attached to 0000:88:00.0 00:04:59.162 Cleaning up... 
00:04:59.162 00:04:59.162 real 0m4.565s 00:04:59.162 user 0m3.383s 00:04:59.162 sys 0m0.240s 00:04:59.162 07:31:50 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.162 07:31:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.162 ************************************ 00:04:59.162 END TEST env_dpdk_post_init 00:04:59.162 ************************************ 00:04:59.162 07:31:50 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.162 07:31:50 env -- env/env.sh@26 -- # uname 00:04:59.162 07:31:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:59.162 07:31:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.162 07:31:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.162 07:31:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.162 07:31:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.162 ************************************ 00:04:59.162 START TEST env_mem_callbacks 00:04:59.162 ************************************ 00:04:59.162 07:31:50 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.162 EAL: Detected CPU lcores: 48 00:04:59.162 EAL: Detected NUMA nodes: 2 00:04:59.162 EAL: Detected shared linkage of DPDK 00:04:59.421 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.421 EAL: Selected IOVA mode 'VA' 00:04:59.421 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.421 EAL: VFIO support initialized 00:04:59.421 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.421 00:04:59.421 00:04:59.421 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.421 http://cunit.sourceforge.net/ 00:04:59.421 00:04:59.421 00:04:59.421 Suite: memory 00:04:59.421 Test: test ... 
00:04:59.421 register 0x200000200000 2097152 00:04:59.421 malloc 3145728 00:04:59.421 register 0x200000400000 4194304 00:04:59.421 buf 0x2000004fffc0 len 3145728 PASSED 00:04:59.421 malloc 64 00:04:59.421 buf 0x2000004ffec0 len 64 PASSED 00:04:59.421 malloc 4194304 00:04:59.421 register 0x200000800000 6291456 00:04:59.421 buf 0x2000009fffc0 len 4194304 PASSED 00:04:59.421 free 0x2000004fffc0 3145728 00:04:59.421 free 0x2000004ffec0 64 00:04:59.421 unregister 0x200000400000 4194304 PASSED 00:04:59.421 free 0x2000009fffc0 4194304 00:04:59.421 unregister 0x200000800000 6291456 PASSED 00:04:59.421 malloc 8388608 00:04:59.421 register 0x200000400000 10485760 00:04:59.421 buf 0x2000005fffc0 len 8388608 PASSED 00:04:59.421 free 0x2000005fffc0 8388608 00:04:59.421 unregister 0x200000400000 10485760 PASSED 00:04:59.421 passed 00:04:59.421 00:04:59.421 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.421 suites 1 1 n/a 0 0 00:04:59.421 tests 1 1 1 0 0 00:04:59.421 asserts 15 15 15 0 n/a 00:04:59.421 00:04:59.421 Elapsed time = 0.060 seconds 00:04:59.421 00:04:59.421 real 0m0.178s 00:04:59.421 user 0m0.097s 00:04:59.421 sys 0m0.080s 00:04:59.421 07:31:50 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.421 07:31:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:59.421 ************************************ 00:04:59.421 END TEST env_mem_callbacks 00:04:59.421 ************************************ 00:04:59.421 07:31:50 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.421 00:04:59.421 real 0m13.985s 00:04:59.421 user 0m11.393s 00:04:59.421 sys 0m1.605s 00:04:59.421 07:31:50 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.421 07:31:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.421 ************************************ 00:04:59.421 END TEST env 00:04:59.421 ************************************ 00:04:59.421 07:31:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.421 07:31:50 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.421 07:31:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.421 07:31:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.421 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.421 ************************************ 00:04:59.421 START TEST rpc 00:04:59.421 ************************************ 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.421 * Looking for test storage... 00:04:59.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.421 07:31:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=934823 00:04:59.421 07:31:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:59.421 07:31:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.421 07:31:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 934823 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@829 -- # '[' -z 934823 ']' 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:59.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.421 07:31:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.680 [2024-07-15 07:31:50.726020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:59.680 [2024-07-15 07:31:50.726173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934823 ] 00:04:59.680 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.680 [2024-07-15 07:31:50.850889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.939 [2024-07-15 07:31:51.103237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.939 [2024-07-15 07:31:51.103329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 934823' to capture a snapshot of events at runtime. 00:04:59.939 [2024-07-15 07:31:51.103355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.939 [2024-07-15 07:31:51.103386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.939 [2024-07-15 07:31:51.103405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid934823 for offline analysis/debug. 00:04:59.939 [2024-07-15 07:31:51.103460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.874 07:31:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.874 07:31:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.874 07:31:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.874 07:31:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.874 07:31:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:00.874 07:31:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:00.874 07:31:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.874 07:31:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.874 07:31:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.874 ************************************ 00:05:00.874 START TEST rpc_integrity 00:05:00.874 ************************************ 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.875 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.875 { 00:05:00.875 "name": "Malloc0", 00:05:00.875 "aliases": [ 00:05:00.875 "babebbda-067d-497e-ab53-456498132f0c" 00:05:00.875 ], 00:05:00.875 "product_name": "Malloc disk", 00:05:00.875 "block_size": 512, 00:05:00.875 "num_blocks": 16384, 00:05:00.875 "uuid": "babebbda-067d-497e-ab53-456498132f0c", 00:05:00.875 "assigned_rate_limits": { 00:05:00.875 "rw_ios_per_sec": 0, 00:05:00.875 "rw_mbytes_per_sec": 0, 00:05:00.875 "r_mbytes_per_sec": 0, 00:05:00.875 "w_mbytes_per_sec": 0 00:05:00.875 }, 00:05:00.875 "claimed": false, 00:05:00.875 "zoned": false, 00:05:00.875 "supported_io_types": { 00:05:00.875 "read": true, 00:05:00.875 "write": true, 00:05:00.875 "unmap": true, 00:05:00.875 "flush": true, 00:05:00.875 "reset": true, 00:05:00.875 "nvme_admin": false, 00:05:00.875 "nvme_io": false, 00:05:00.875 "nvme_io_md": false, 00:05:00.875 "write_zeroes": true, 00:05:00.875 "zcopy": true, 00:05:00.875 "get_zone_info": false, 00:05:00.875 "zone_management": false, 00:05:00.875 "zone_append": false, 00:05:00.875 "compare": false, 00:05:00.875 "compare_and_write": false, 00:05:00.875 "abort": true, 00:05:00.875 "seek_hole": false, 00:05:00.875 "seek_data": false, 00:05:00.875 "copy": true, 00:05:00.875 "nvme_iov_md": false 00:05:00.875 }, 00:05:00.875 "memory_domains": [ 00:05:00.875 { 00:05:00.875 "dma_device_id": "system", 00:05:00.875 "dma_device_type": 1 00:05:00.875 }, 00:05:00.875 { 00:05:00.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.875 "dma_device_type": 2 00:05:00.875 } 00:05:00.875 ], 00:05:00.875 "driver_specific": {} 00:05:00.875 } 00:05:00.875 ]' 00:05:00.875 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 [2024-07-15 07:31:52.139276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:01.133 [2024-07-15 07:31:52.139363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.133 [2024-07-15 07:31:52.139407] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:05:01.133 [2024-07-15 07:31:52.139436] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
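Note: everything in rpc_integrity goes through rpc_cmd, a thin wrapper that forwards to scripts/rpc.py on the default /var/tmp/spdk.sock socket. The same sequence can be replayed by hand against a running spdk_tgt; a hedged sketch using only the RPC names and arguments visible in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on the malloc bdev
    $rpc bdev_get_bdevs | jq length                    # the test expects 2 here (Malloc0 + Passthru0)
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # back to 0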
00:05:01.133 [2024-07-15 07:31:52.142199] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.133 [2024-07-15 07:31:52.142257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.133 Passthru0 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.133 { 00:05:01.133 "name": "Malloc0", 00:05:01.133 "aliases": [ 00:05:01.133 "babebbda-067d-497e-ab53-456498132f0c" 00:05:01.133 ], 00:05:01.133 "product_name": "Malloc disk", 00:05:01.133 "block_size": 512, 00:05:01.133 "num_blocks": 16384, 00:05:01.133 "uuid": "babebbda-067d-497e-ab53-456498132f0c", 00:05:01.133 "assigned_rate_limits": { 00:05:01.133 "rw_ios_per_sec": 0, 00:05:01.133 "rw_mbytes_per_sec": 0, 00:05:01.133 "r_mbytes_per_sec": 0, 00:05:01.133 "w_mbytes_per_sec": 0 00:05:01.133 }, 00:05:01.133 "claimed": true, 00:05:01.133 "claim_type": "exclusive_write", 00:05:01.133 "zoned": false, 00:05:01.133 "supported_io_types": { 00:05:01.133 "read": true, 00:05:01.133 "write": true, 00:05:01.133 "unmap": true, 00:05:01.133 "flush": true, 00:05:01.133 "reset": true, 00:05:01.133 "nvme_admin": false, 00:05:01.133 "nvme_io": false, 00:05:01.133 "nvme_io_md": false, 00:05:01.133 "write_zeroes": true, 00:05:01.133 "zcopy": true, 00:05:01.133 "get_zone_info": false, 00:05:01.133 "zone_management": false, 00:05:01.133 "zone_append": false, 00:05:01.133 "compare": false, 00:05:01.133 "compare_and_write": false, 00:05:01.133 "abort": true, 00:05:01.133 "seek_hole": false, 00:05:01.133 "seek_data": false, 00:05:01.133 "copy": true, 00:05:01.133 "nvme_iov_md": false 00:05:01.133 }, 00:05:01.133 "memory_domains": [ 00:05:01.133 { 00:05:01.133 "dma_device_id": "system", 00:05:01.133 "dma_device_type": 1 00:05:01.133 }, 00:05:01.133 { 00:05:01.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.133 "dma_device_type": 2 00:05:01.133 } 00:05:01.133 ], 00:05:01.133 "driver_specific": {} 00:05:01.133 }, 00:05:01.133 { 00:05:01.133 "name": "Passthru0", 00:05:01.133 "aliases": [ 00:05:01.133 "ed137b81-0496-5bdc-9099-6e3ad5499dc3" 00:05:01.133 ], 00:05:01.133 "product_name": "passthru", 00:05:01.133 "block_size": 512, 00:05:01.133 "num_blocks": 16384, 00:05:01.133 "uuid": "ed137b81-0496-5bdc-9099-6e3ad5499dc3", 00:05:01.133 "assigned_rate_limits": { 00:05:01.133 "rw_ios_per_sec": 0, 00:05:01.133 "rw_mbytes_per_sec": 0, 00:05:01.133 "r_mbytes_per_sec": 0, 00:05:01.133 "w_mbytes_per_sec": 0 00:05:01.133 }, 00:05:01.133 "claimed": false, 00:05:01.133 "zoned": false, 00:05:01.133 "supported_io_types": { 00:05:01.133 "read": true, 00:05:01.133 "write": true, 00:05:01.133 "unmap": true, 00:05:01.133 "flush": true, 00:05:01.133 "reset": true, 00:05:01.133 "nvme_admin": false, 00:05:01.133 "nvme_io": false, 00:05:01.133 "nvme_io_md": false, 00:05:01.133 "write_zeroes": true, 00:05:01.133 "zcopy": true, 00:05:01.133 "get_zone_info": false, 00:05:01.133 "zone_management": false, 00:05:01.133 "zone_append": false, 00:05:01.133 "compare": false, 00:05:01.133 "compare_and_write": false, 00:05:01.133 "abort": true, 00:05:01.133 
"seek_hole": false, 00:05:01.133 "seek_data": false, 00:05:01.133 "copy": true, 00:05:01.133 "nvme_iov_md": false 00:05:01.133 }, 00:05:01.133 "memory_domains": [ 00:05:01.133 { 00:05:01.133 "dma_device_id": "system", 00:05:01.133 "dma_device_type": 1 00:05:01.133 }, 00:05:01.133 { 00:05:01.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.133 "dma_device_type": 2 00:05:01.133 } 00:05:01.133 ], 00:05:01.133 "driver_specific": { 00:05:01.133 "passthru": { 00:05:01.133 "name": "Passthru0", 00:05:01.133 "base_bdev_name": "Malloc0" 00:05:01.133 } 00:05:01.133 } 00:05:01.133 } 00:05:01.133 ]' 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.133 07:31:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.133 00:05:01.133 real 0m0.263s 00:05:01.133 user 0m0.150s 00:05:01.133 sys 0m0.024s 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 ************************************ 00:05:01.133 END TEST rpc_integrity 00:05:01.133 ************************************ 00:05:01.133 07:31:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.133 07:31:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:01.133 07:31:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.133 07:31:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.133 07:31:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.133 ************************************ 00:05:01.133 START TEST rpc_plugins 00:05:01.133 ************************************ 00:05:01.133 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:01.133 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:01.133 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.133 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.134 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.134 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:01.134 07:31:52 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:01.134 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.134 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.134 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.134 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:01.134 { 00:05:01.134 "name": "Malloc1", 00:05:01.134 "aliases": [ 00:05:01.134 "ac0f097c-018e-484c-a5c9-8c9c4d5ad555" 00:05:01.134 ], 00:05:01.134 "product_name": "Malloc disk", 00:05:01.134 "block_size": 4096, 00:05:01.134 "num_blocks": 256, 00:05:01.134 "uuid": "ac0f097c-018e-484c-a5c9-8c9c4d5ad555", 00:05:01.134 "assigned_rate_limits": { 00:05:01.134 "rw_ios_per_sec": 0, 00:05:01.134 "rw_mbytes_per_sec": 0, 00:05:01.134 "r_mbytes_per_sec": 0, 00:05:01.134 "w_mbytes_per_sec": 0 00:05:01.134 }, 00:05:01.134 "claimed": false, 00:05:01.134 "zoned": false, 00:05:01.134 "supported_io_types": { 00:05:01.134 "read": true, 00:05:01.134 "write": true, 00:05:01.134 "unmap": true, 00:05:01.134 "flush": true, 00:05:01.134 "reset": true, 00:05:01.134 "nvme_admin": false, 00:05:01.134 "nvme_io": false, 00:05:01.134 "nvme_io_md": false, 00:05:01.134 "write_zeroes": true, 00:05:01.134 "zcopy": true, 00:05:01.134 "get_zone_info": false, 00:05:01.134 "zone_management": false, 00:05:01.134 "zone_append": false, 00:05:01.134 "compare": false, 00:05:01.134 "compare_and_write": false, 00:05:01.134 "abort": true, 00:05:01.134 "seek_hole": false, 00:05:01.134 "seek_data": false, 00:05:01.134 "copy": true, 00:05:01.134 "nvme_iov_md": false 00:05:01.134 }, 00:05:01.134 "memory_domains": [ 00:05:01.134 { 00:05:01.134 "dma_device_id": "system", 00:05:01.134 "dma_device_type": 1 00:05:01.134 }, 00:05:01.134 { 00:05:01.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.134 "dma_device_type": 2 00:05:01.134 } 00:05:01.134 ], 00:05:01.134 "driver_specific": {} 00:05:01.134 } 00:05:01.134 ]' 00:05:01.134 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:01.393 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:01.393 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.393 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.393 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:01.393 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:01.393 07:31:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:01.393 00:05:01.393 real 0m0.115s 00:05:01.393 user 0m0.072s 00:05:01.393 sys 0m0.011s 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.393 07:31:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.393 ************************************ 00:05:01.393 END TEST rpc_plugins 00:05:01.393 ************************************ 00:05:01.393 07:31:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.393 07:31:52 rpc 
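[editor's note] The rpc_plugins test above drives the same malloc lifecycle through rpc.py's --plugin mechanism, which loads an external Python module that registers extra subcommands. A sketch, assuming the plugin module rpc_plugin.py sits on PYTHONPATH under the SPDK tree's test/rpc directory (an assumption about the layout):

PYTHONPATH=$PYTHONPATH:test/rpc scripts/rpc.py --plugin rpc_plugin create_malloc          # prints e.g. Malloc1
PYTHONPATH=$PYTHONPATH:test/rpc scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1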
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:01.393 07:31:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.393 07:31:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.393 07:31:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.393 ************************************ 00:05:01.393 START TEST rpc_trace_cmd_test 00:05:01.393 ************************************ 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:01.393 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid934823", 00:05:01.393 "tpoint_group_mask": "0x8", 00:05:01.393 "iscsi_conn": { 00:05:01.393 "mask": "0x2", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "scsi": { 00:05:01.393 "mask": "0x4", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "bdev": { 00:05:01.393 "mask": "0x8", 00:05:01.393 "tpoint_mask": "0xffffffffffffffff" 00:05:01.393 }, 00:05:01.393 "nvmf_rdma": { 00:05:01.393 "mask": "0x10", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "nvmf_tcp": { 00:05:01.393 "mask": "0x20", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "ftl": { 00:05:01.393 "mask": "0x40", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "blobfs": { 00:05:01.393 "mask": "0x80", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "dsa": { 00:05:01.393 "mask": "0x200", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "thread": { 00:05:01.393 "mask": "0x400", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "nvme_pcie": { 00:05:01.393 "mask": "0x800", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "iaa": { 00:05:01.393 "mask": "0x1000", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "nvme_tcp": { 00:05:01.393 "mask": "0x2000", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "bdev_nvme": { 00:05:01.393 "mask": "0x4000", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 }, 00:05:01.393 "sock": { 00:05:01.393 "mask": "0x8000", 00:05:01.393 "tpoint_mask": "0x0" 00:05:01.393 } 00:05:01.393 }' 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:01.393 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:01.653 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:01.653 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:01.653 07:31:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:01.653 00:05:01.653 real 0m0.198s 00:05:01.653 user 0m0.174s 00:05:01.653 sys 0m0.015s 00:05:01.653 07:31:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.653 07:31:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.653 ************************************ 00:05:01.653 END TEST rpc_trace_cmd_test 00:05:01.653 ************************************ 00:05:01.653 07:31:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.653 07:31:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:01.653 07:31:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:01.653 07:31:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:01.653 07:31:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.653 07:31:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.653 07:31:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.653 ************************************ 00:05:01.653 START TEST rpc_daemon_integrity 00:05:01.653 ************************************ 00:05:01.653 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:01.653 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.653 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.653 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.654 { 00:05:01.654 "name": "Malloc2", 00:05:01.654 "aliases": [ 00:05:01.654 "5eaf68da-6771-448a-a84a-f3fe2e2a806f" 00:05:01.654 ], 00:05:01.654 "product_name": "Malloc disk", 00:05:01.654 "block_size": 512, 00:05:01.654 "num_blocks": 16384, 00:05:01.654 "uuid": "5eaf68da-6771-448a-a84a-f3fe2e2a806f", 00:05:01.654 "assigned_rate_limits": { 00:05:01.654 "rw_ios_per_sec": 0, 00:05:01.654 "rw_mbytes_per_sec": 0, 00:05:01.654 "r_mbytes_per_sec": 0, 00:05:01.654 "w_mbytes_per_sec": 0 00:05:01.654 }, 00:05:01.654 "claimed": false, 00:05:01.654 "zoned": false, 00:05:01.654 "supported_io_types": { 00:05:01.654 "read": true, 00:05:01.654 "write": true, 00:05:01.654 "unmap": true, 00:05:01.654 "flush": true, 00:05:01.654 "reset": true, 00:05:01.654 "nvme_admin": false, 
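[editor's note] The rpc_trace_cmd_test block above only inspects trace_get_info output; the all-ones bdev tpoint_mask is there because the target was started with the bdev tracepoint group (mask 0x8, per the tpoint_group_mask field) enabled. A sketch of the same check, assuming the target is launched with -e and the hex group mask:

build/bin/spdk_tgt -m 0x1 -e 0x8 &                            # 0x8 enables the bdev tracepoint group
scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask       # expect a non-zero mask
scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path        # /dev/shm/spdk_tgt_trace.pid<pid>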
00:05:01.654 "nvme_io": false, 00:05:01.654 "nvme_io_md": false, 00:05:01.654 "write_zeroes": true, 00:05:01.654 "zcopy": true, 00:05:01.654 "get_zone_info": false, 00:05:01.654 "zone_management": false, 00:05:01.654 "zone_append": false, 00:05:01.654 "compare": false, 00:05:01.654 "compare_and_write": false, 00:05:01.654 "abort": true, 00:05:01.654 "seek_hole": false, 00:05:01.654 "seek_data": false, 00:05:01.654 "copy": true, 00:05:01.654 "nvme_iov_md": false 00:05:01.654 }, 00:05:01.654 "memory_domains": [ 00:05:01.654 { 00:05:01.654 "dma_device_id": "system", 00:05:01.654 "dma_device_type": 1 00:05:01.654 }, 00:05:01.654 { 00:05:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.654 "dma_device_type": 2 00:05:01.654 } 00:05:01.654 ], 00:05:01.654 "driver_specific": {} 00:05:01.654 } 00:05:01.654 ]' 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.654 [2024-07-15 07:31:52.848759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.654 [2024-07-15 07:31:52.848835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.654 [2024-07-15 07:31:52.848887] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:05:01.654 [2024-07-15 07:31:52.848933] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.654 [2024-07-15 07:31:52.851599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.654 [2024-07-15 07:31:52.851643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.654 Passthru0 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.654 { 00:05:01.654 "name": "Malloc2", 00:05:01.654 "aliases": [ 00:05:01.654 "5eaf68da-6771-448a-a84a-f3fe2e2a806f" 00:05:01.654 ], 00:05:01.654 "product_name": "Malloc disk", 00:05:01.654 "block_size": 512, 00:05:01.654 "num_blocks": 16384, 00:05:01.654 "uuid": "5eaf68da-6771-448a-a84a-f3fe2e2a806f", 00:05:01.654 "assigned_rate_limits": { 00:05:01.654 "rw_ios_per_sec": 0, 00:05:01.654 "rw_mbytes_per_sec": 0, 00:05:01.654 "r_mbytes_per_sec": 0, 00:05:01.654 "w_mbytes_per_sec": 0 00:05:01.654 }, 00:05:01.654 "claimed": true, 00:05:01.654 "claim_type": "exclusive_write", 00:05:01.654 "zoned": false, 00:05:01.654 "supported_io_types": { 00:05:01.654 "read": true, 00:05:01.654 "write": true, 00:05:01.654 "unmap": true, 00:05:01.654 "flush": true, 00:05:01.654 "reset": true, 00:05:01.654 "nvme_admin": false, 00:05:01.654 "nvme_io": false, 00:05:01.654 "nvme_io_md": false, 00:05:01.654 "write_zeroes": true, 00:05:01.654 "zcopy": 
true, 00:05:01.654 "get_zone_info": false, 00:05:01.654 "zone_management": false, 00:05:01.654 "zone_append": false, 00:05:01.654 "compare": false, 00:05:01.654 "compare_and_write": false, 00:05:01.654 "abort": true, 00:05:01.654 "seek_hole": false, 00:05:01.654 "seek_data": false, 00:05:01.654 "copy": true, 00:05:01.654 "nvme_iov_md": false 00:05:01.654 }, 00:05:01.654 "memory_domains": [ 00:05:01.654 { 00:05:01.654 "dma_device_id": "system", 00:05:01.654 "dma_device_type": 1 00:05:01.654 }, 00:05:01.654 { 00:05:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.654 "dma_device_type": 2 00:05:01.654 } 00:05:01.654 ], 00:05:01.654 "driver_specific": {} 00:05:01.654 }, 00:05:01.654 { 00:05:01.654 "name": "Passthru0", 00:05:01.654 "aliases": [ 00:05:01.654 "1f900720-69df-5c12-842f-9874cb8ab6c5" 00:05:01.654 ], 00:05:01.654 "product_name": "passthru", 00:05:01.654 "block_size": 512, 00:05:01.654 "num_blocks": 16384, 00:05:01.654 "uuid": "1f900720-69df-5c12-842f-9874cb8ab6c5", 00:05:01.654 "assigned_rate_limits": { 00:05:01.654 "rw_ios_per_sec": 0, 00:05:01.654 "rw_mbytes_per_sec": 0, 00:05:01.654 "r_mbytes_per_sec": 0, 00:05:01.654 "w_mbytes_per_sec": 0 00:05:01.654 }, 00:05:01.654 "claimed": false, 00:05:01.654 "zoned": false, 00:05:01.654 "supported_io_types": { 00:05:01.654 "read": true, 00:05:01.654 "write": true, 00:05:01.654 "unmap": true, 00:05:01.654 "flush": true, 00:05:01.654 "reset": true, 00:05:01.654 "nvme_admin": false, 00:05:01.654 "nvme_io": false, 00:05:01.654 "nvme_io_md": false, 00:05:01.654 "write_zeroes": true, 00:05:01.654 "zcopy": true, 00:05:01.654 "get_zone_info": false, 00:05:01.654 "zone_management": false, 00:05:01.654 "zone_append": false, 00:05:01.654 "compare": false, 00:05:01.654 "compare_and_write": false, 00:05:01.654 "abort": true, 00:05:01.654 "seek_hole": false, 00:05:01.654 "seek_data": false, 00:05:01.654 "copy": true, 00:05:01.654 "nvme_iov_md": false 00:05:01.654 }, 00:05:01.654 "memory_domains": [ 00:05:01.654 { 00:05:01.654 "dma_device_id": "system", 00:05:01.654 "dma_device_type": 1 00:05:01.654 }, 00:05:01.654 { 00:05:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.654 "dma_device_type": 2 00:05:01.654 } 00:05:01.654 ], 00:05:01.654 "driver_specific": { 00:05:01.654 "passthru": { 00:05:01.654 "name": "Passthru0", 00:05:01.654 "base_bdev_name": "Malloc2" 00:05:01.654 } 00:05:01.654 } 00:05:01.654 } 00:05:01.654 ]' 00:05:01.654 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.913 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.914 00:05:01.914 real 0m0.260s 00:05:01.914 user 0m0.152s 00:05:01.914 sys 0m0.020s 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.914 07:31:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.914 ************************************ 00:05:01.914 END TEST rpc_daemon_integrity 00:05:01.914 ************************************ 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.914 07:31:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.914 07:31:53 rpc -- rpc/rpc.sh@84 -- # killprocess 934823 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@948 -- # '[' -z 934823 ']' 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@952 -- # kill -0 934823 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@953 -- # uname 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 934823 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 934823' 00:05:01.914 killing process with pid 934823 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@967 -- # kill 934823 00:05:01.914 07:31:53 rpc -- common/autotest_common.sh@972 -- # wait 934823 00:05:04.448 00:05:04.448 real 0m4.970s 00:05:04.448 user 0m5.464s 00:05:04.448 sys 0m0.792s 00:05:04.448 07:31:55 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.448 07:31:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.448 ************************************ 00:05:04.448 END TEST rpc 00:05:04.448 ************************************ 00:05:04.448 07:31:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.448 07:31:55 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.448 07:31:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.448 07:31:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.448 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:05:04.448 ************************************ 00:05:04.448 START TEST skip_rpc 00:05:04.448 ************************************ 00:05:04.448 07:31:55 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.448 * Looking for test storage... 
00:05:04.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.448 07:31:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.448 07:31:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.448 07:31:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.448 07:31:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.448 07:31:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.448 07:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.448 ************************************ 00:05:04.448 START TEST skip_rpc 00:05:04.448 ************************************ 00:05:04.448 07:31:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:04.448 07:31:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=935544 00:05:04.448 07:31:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.448 07:31:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.448 07:31:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.706 [2024-07-15 07:31:55.763204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:04.706 [2024-07-15 07:31:55.763364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935544 ] 00:05:04.706 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.706 [2024-07-15 07:31:55.888413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.018 [2024-07-15 07:31:56.145644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 935544 00:05:10.286 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 935544 ']' 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 935544 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935544 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935544' 00:05:10.287 killing process with pid 935544 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 935544 00:05:10.287 07:32:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 935544 00:05:12.193 00:05:12.193 real 0m7.531s 00:05:12.193 user 0m7.026s 00:05:12.193 sys 0m0.485s 00:05:12.193 07:32:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.193 07:32:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.193 ************************************ 00:05:12.193 END TEST skip_rpc 00:05:12.193 ************************************ 00:05:12.193 07:32:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.193 07:32:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:12.193 07:32:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.193 07:32:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.193 07:32:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.193 ************************************ 00:05:12.193 START TEST skip_rpc_with_json 00:05:12.193 ************************************ 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=936485 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 936485 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 936485 ']' 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
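[editor's note] The skip_rpc test that just finished asserts the negative path: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so any RPC must fail (hence the NOT rpc_cmd wrapper and es=1 above). A minimal reproduction, assuming the in-tree binaries:

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server is up" >&2
    exit 1
fi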
00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.193 07:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.193 [2024-07-15 07:32:03.352509] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:12.193 [2024-07-15 07:32:03.352650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936485 ] 00:05:12.451 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.451 [2024-07-15 07:32:03.477120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.725 [2024-07-15 07:32:03.728421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 [2024-07-15 07:32:04.593831] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:13.705 request: 00:05:13.705 { 00:05:13.705 "trtype": "tcp", 00:05:13.705 "method": "nvmf_get_transports", 00:05:13.705 "req_id": 1 00:05:13.705 } 00:05:13.705 Got JSON-RPC error response 00:05:13.705 response: 00:05:13.705 { 00:05:13.705 "code": -19, 00:05:13.705 "message": "No such device" 00:05:13.705 } 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 [2024-07-15 07:32:04.602006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.705 { 00:05:13.705 "subsystems": [ 00:05:13.705 { 00:05:13.705 "subsystem": "keyring", 00:05:13.705 "config": [] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "iobuf", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "iobuf_set_options", 00:05:13.705 "params": { 00:05:13.705 "small_pool_count": 8192, 00:05:13.705 "large_pool_count": 1024, 00:05:13.705 "small_bufsize": 8192, 00:05:13.705 "large_bufsize": 135168 00:05:13.705 } 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": 
"sock", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "sock_set_default_impl", 00:05:13.705 "params": { 00:05:13.705 "impl_name": "posix" 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "sock_impl_set_options", 00:05:13.705 "params": { 00:05:13.705 "impl_name": "ssl", 00:05:13.705 "recv_buf_size": 4096, 00:05:13.705 "send_buf_size": 4096, 00:05:13.705 "enable_recv_pipe": true, 00:05:13.705 "enable_quickack": false, 00:05:13.705 "enable_placement_id": 0, 00:05:13.705 "enable_zerocopy_send_server": true, 00:05:13.705 "enable_zerocopy_send_client": false, 00:05:13.705 "zerocopy_threshold": 0, 00:05:13.705 "tls_version": 0, 00:05:13.705 "enable_ktls": false 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "sock_impl_set_options", 00:05:13.705 "params": { 00:05:13.705 "impl_name": "posix", 00:05:13.705 "recv_buf_size": 2097152, 00:05:13.705 "send_buf_size": 2097152, 00:05:13.705 "enable_recv_pipe": true, 00:05:13.705 "enable_quickack": false, 00:05:13.705 "enable_placement_id": 0, 00:05:13.705 "enable_zerocopy_send_server": true, 00:05:13.705 "enable_zerocopy_send_client": false, 00:05:13.705 "zerocopy_threshold": 0, 00:05:13.705 "tls_version": 0, 00:05:13.705 "enable_ktls": false 00:05:13.705 } 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "vmd", 00:05:13.705 "config": [] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "accel", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "accel_set_options", 00:05:13.705 "params": { 00:05:13.705 "small_cache_size": 128, 00:05:13.705 "large_cache_size": 16, 00:05:13.705 "task_count": 2048, 00:05:13.705 "sequence_count": 2048, 00:05:13.705 "buf_count": 2048 00:05:13.705 } 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "bdev", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "bdev_set_options", 00:05:13.705 "params": { 00:05:13.705 "bdev_io_pool_size": 65535, 00:05:13.705 "bdev_io_cache_size": 256, 00:05:13.705 "bdev_auto_examine": true, 00:05:13.705 "iobuf_small_cache_size": 128, 00:05:13.705 "iobuf_large_cache_size": 16 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "bdev_raid_set_options", 00:05:13.705 "params": { 00:05:13.705 "process_window_size_kb": 1024 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "bdev_iscsi_set_options", 00:05:13.705 "params": { 00:05:13.705 "timeout_sec": 30 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "bdev_nvme_set_options", 00:05:13.705 "params": { 00:05:13.705 "action_on_timeout": "none", 00:05:13.705 "timeout_us": 0, 00:05:13.705 "timeout_admin_us": 0, 00:05:13.705 "keep_alive_timeout_ms": 10000, 00:05:13.705 "arbitration_burst": 0, 00:05:13.705 "low_priority_weight": 0, 00:05:13.705 "medium_priority_weight": 0, 00:05:13.705 "high_priority_weight": 0, 00:05:13.705 "nvme_adminq_poll_period_us": 10000, 00:05:13.705 "nvme_ioq_poll_period_us": 0, 00:05:13.705 "io_queue_requests": 0, 00:05:13.705 "delay_cmd_submit": true, 00:05:13.705 "transport_retry_count": 4, 00:05:13.705 "bdev_retry_count": 3, 00:05:13.705 "transport_ack_timeout": 0, 00:05:13.705 "ctrlr_loss_timeout_sec": 0, 00:05:13.705 "reconnect_delay_sec": 0, 00:05:13.705 "fast_io_fail_timeout_sec": 0, 00:05:13.705 "disable_auto_failback": false, 00:05:13.705 "generate_uuids": false, 00:05:13.705 "transport_tos": 0, 00:05:13.705 "nvme_error_stat": false, 00:05:13.705 "rdma_srq_size": 0, 00:05:13.705 "io_path_stat": false, 
00:05:13.705 "allow_accel_sequence": false, 00:05:13.705 "rdma_max_cq_size": 0, 00:05:13.705 "rdma_cm_event_timeout_ms": 0, 00:05:13.705 "dhchap_digests": [ 00:05:13.705 "sha256", 00:05:13.705 "sha384", 00:05:13.705 "sha512" 00:05:13.705 ], 00:05:13.705 "dhchap_dhgroups": [ 00:05:13.705 "null", 00:05:13.705 "ffdhe2048", 00:05:13.705 "ffdhe3072", 00:05:13.705 "ffdhe4096", 00:05:13.705 "ffdhe6144", 00:05:13.705 "ffdhe8192" 00:05:13.705 ] 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "bdev_nvme_set_hotplug", 00:05:13.705 "params": { 00:05:13.705 "period_us": 100000, 00:05:13.705 "enable": false 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "bdev_wait_for_examine" 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "scsi", 00:05:13.705 "config": null 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "scheduler", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "framework_set_scheduler", 00:05:13.705 "params": { 00:05:13.705 "name": "static" 00:05:13.705 } 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "vhost_scsi", 00:05:13.705 "config": [] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "vhost_blk", 00:05:13.705 "config": [] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "ublk", 00:05:13.705 "config": [] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "nbd", 00:05:13.705 "config": [] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "nvmf", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "nvmf_set_config", 00:05:13.705 "params": { 00:05:13.705 "discovery_filter": "match_any", 00:05:13.705 "admin_cmd_passthru": { 00:05:13.705 "identify_ctrlr": false 00:05:13.705 } 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "nvmf_set_max_subsystems", 00:05:13.705 "params": { 00:05:13.705 "max_subsystems": 1024 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "nvmf_set_crdt", 00:05:13.705 "params": { 00:05:13.705 "crdt1": 0, 00:05:13.705 "crdt2": 0, 00:05:13.705 "crdt3": 0 00:05:13.705 } 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "method": "nvmf_create_transport", 00:05:13.705 "params": { 00:05:13.705 "trtype": "TCP", 00:05:13.705 "max_queue_depth": 128, 00:05:13.705 "max_io_qpairs_per_ctrlr": 127, 00:05:13.705 "in_capsule_data_size": 4096, 00:05:13.705 "max_io_size": 131072, 00:05:13.705 "io_unit_size": 131072, 00:05:13.705 "max_aq_depth": 128, 00:05:13.705 "num_shared_buffers": 511, 00:05:13.705 "buf_cache_size": 4294967295, 00:05:13.705 "dif_insert_or_strip": false, 00:05:13.705 "zcopy": false, 00:05:13.705 "c2h_success": true, 00:05:13.705 "sock_priority": 0, 00:05:13.705 "abort_timeout_sec": 1, 00:05:13.705 "ack_timeout": 0, 00:05:13.705 "data_wr_pool_size": 0 00:05:13.705 } 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 }, 00:05:13.705 { 00:05:13.705 "subsystem": "iscsi", 00:05:13.705 "config": [ 00:05:13.705 { 00:05:13.705 "method": "iscsi_set_options", 00:05:13.705 "params": { 00:05:13.705 "node_base": "iqn.2016-06.io.spdk", 00:05:13.705 "max_sessions": 128, 00:05:13.705 "max_connections_per_session": 2, 00:05:13.705 "max_queue_depth": 64, 00:05:13.705 "default_time2wait": 2, 00:05:13.705 "default_time2retain": 20, 00:05:13.705 "first_burst_length": 8192, 00:05:13.705 "immediate_data": true, 00:05:13.705 "allow_duplicated_isid": false, 00:05:13.705 "error_recovery_level": 0, 00:05:13.705 "nop_timeout": 60, 00:05:13.705 "nop_in_interval": 30, 00:05:13.705 "disable_chap": 
false, 00:05:13.705 "require_chap": false, 00:05:13.705 "mutual_chap": false, 00:05:13.705 "chap_group": 0, 00:05:13.705 "max_large_datain_per_connection": 64, 00:05:13.705 "max_r2t_per_connection": 4, 00:05:13.705 "pdu_pool_size": 36864, 00:05:13.705 "immediate_data_pool_size": 16384, 00:05:13.705 "data_out_pool_size": 2048 00:05:13.705 } 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 } 00:05:13.705 ] 00:05:13.705 } 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 936485 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 936485 ']' 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 936485 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936485 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.705 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936485' 00:05:13.706 killing process with pid 936485 00:05:13.706 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 936485 00:05:13.706 07:32:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 936485 00:05:16.242 07:32:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=937003 00:05:16.242 07:32:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.242 07:32:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 937003 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 937003 ']' 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 937003 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937003 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937003' 00:05:21.539 killing process with pid 937003 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 937003 00:05:21.539 07:32:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 937003 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.076 00:05:24.076 real 0m11.540s 00:05:24.076 user 0m11.020s 00:05:24.076 sys 0m1.030s 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.076 ************************************ 00:05:24.076 END TEST skip_rpc_with_json 00:05:24.076 ************************************ 00:05:24.076 07:32:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.076 07:32:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.076 07:32:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.076 07:32:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.076 07:32:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.076 ************************************ 00:05:24.076 START TEST skip_rpc_with_delay 00:05:24.076 ************************************ 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.076 [2024-07-15 07:32:14.934324] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
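[editor's note] The skip_rpc_with_json run above is a config round trip: build state over RPC, snapshot it with save_config, then restart the target from that JSON with no RPC server and confirm from the log that the state was replayed. A condensed sketch of the same flow, with file names illustrative:

# first instance: create the TCP transport over RPC, then snapshot the full config
build/bin/spdk_tgt -m 0x1 &
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > config.json
kill %1; wait
# second instance: replay the JSON config without an RPC server
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
sleep 5
grep -q 'TCP Transport Init' log.txt    # the transport was recreated from config.json alone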
00:05:24.076 [2024-07-15 07:32:14.934494] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.076 00:05:24.076 real 0m0.139s 00:05:24.076 user 0m0.075s 00:05:24.076 sys 0m0.063s 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.076 07:32:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.076 ************************************ 00:05:24.076 END TEST skip_rpc_with_delay 00:05:24.076 ************************************ 00:05:24.076 07:32:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.076 07:32:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.076 07:32:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.076 07:32:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.076 07:32:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.076 07:32:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.076 07:32:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.076 ************************************ 00:05:24.076 START TEST exit_on_failed_rpc_init 00:05:24.076 ************************************ 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=937895 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 937895 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 937895 ']' 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.076 07:32:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.076 [2024-07-15 07:32:15.128950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
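[editor's note] skip_rpc_with_delay, completed above, only checks an argument-validation error: --wait-for-rpc pauses startup until an RPC arrives, so combining it with --no-rpc-server is rejected before the app starts, exactly as the ERROR line shows. A sketch of the expected failure:

if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: invalid flag combination was accepted" >&2
    exit 1
fi
# stderr carries: Cannot use '--wait-for-rpc' if no RPC server is going to be started.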
00:05:24.076 [2024-07-15 07:32:15.129104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937895 ] 00:05:24.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.076 [2024-07-15 07:32:15.284623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.335 [2024-07-15 07:32:15.514004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.272 07:32:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.272 [2024-07-15 07:32:16.401031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:25.272 [2024-07-15 07:32:16.401180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938132 ] 00:05:25.272 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.530 [2024-07-15 07:32:16.535150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.790 [2024-07-15 07:32:16.786007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.790 [2024-07-15 07:32:16.786175] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:25.790 [2024-07-15 07:32:16.786223] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.790 [2024-07-15 07:32:16.786247] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 937895 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 937895 ']' 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 937895 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.050 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937895 00:05:26.310 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.310 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.310 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937895' 00:05:26.310 killing process with pid 937895 00:05:26.310 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 937895 00:05:26.310 07:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 937895 00:05:28.843 00:05:28.843 real 0m4.737s 00:05:28.843 user 0m5.474s 00:05:28.843 sys 0m0.750s 00:05:28.843 07:32:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.843 07:32:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.843 ************************************ 00:05:28.843 END TEST exit_on_failed_rpc_init 00:05:28.843 ************************************ 00:05:28.843 07:32:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:28.843 07:32:19 skip_rpc -- rpc/skip_rpc.sh@81 -- 
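[editor's note] The "socket in use" failure above is the intended outcome: both instances default to /var/tmp/spdk.sock, so rpc_listen fails in the second one and the app exits non-zero (es=234, mapped to es=1 by the test harness). Giving the second instance its own socket with -r avoids the clash; a sketch:

build/bin/spdk_tgt -m 0x1 &                                   # holds /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2 || echo "failed as expected (socket in use)"
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &            # second instance on its own socket
scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version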
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.843 00:05:28.843 real 0m24.187s 00:05:28.843 user 0m23.692s 00:05:28.843 sys 0m2.488s 00:05:28.843 07:32:19 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.843 07:32:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.843 ************************************ 00:05:28.843 END TEST skip_rpc 00:05:28.843 ************************************ 00:05:28.843 07:32:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.843 07:32:19 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.843 07:32:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.843 07:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.843 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.843 ************************************ 00:05:28.843 START TEST rpc_client 00:05:28.843 ************************************ 00:05:28.843 07:32:19 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.843 * Looking for test storage... 00:05:28.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:28.843 07:32:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:28.843 OK 00:05:28.843 07:32:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.843 00:05:28.843 real 0m0.087s 00:05:28.843 user 0m0.041s 00:05:28.843 sys 0m0.051s 00:05:28.843 07:32:19 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.843 07:32:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.843 ************************************ 00:05:28.843 END TEST rpc_client 00:05:28.843 ************************************ 00:05:28.843 07:32:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.843 07:32:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.843 07:32:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.843 07:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.843 07:32:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.843 ************************************ 00:05:28.843 START TEST json_config 00:05:28.843 ************************************ 00:05:28.843 07:32:19 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.843 07:32:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.843 07:32:20 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.843 07:32:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.844 07:32:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.844 07:32:20 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.844 07:32:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.844 07:32:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.844 07:32:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.844 07:32:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.844 07:32:20 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.844 07:32:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@47 -- # : 0 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.844 07:32:20 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.844 07:32:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:28.844 INFO: JSON configuration test init 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.844 07:32:20 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.844 07:32:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.844 07:32:20 json_config -- json_config/common.sh@10 -- # shift 00:05:28.844 07:32:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.844 07:32:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.844 07:32:20 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:28.844 07:32:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.844 07:32:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.844 07:32:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=938671 00:05:28.844 07:32:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.844 Waiting for target to run... 00:05:28.844 07:32:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.844 07:32:20 json_config -- json_config/common.sh@25 -- # waitforlisten 938671 /var/tmp/spdk_tgt.sock 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@829 -- # '[' -z 938671 ']' 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.844 07:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.104 [2024-07-15 07:32:20.128154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:29.104 [2024-07-15 07:32:20.128334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938671 ] 00:05:29.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.362 [2024-07-15 07:32:20.567765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.620 [2024-07-15 07:32:20.793524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.877 07:32:21 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.878 07:32:21 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:29.878 07:32:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.878 00:05:29.878 07:32:21 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:29.878 07:32:21 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:29.878 07:32:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.878 07:32:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.878 07:32:21 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:29.878 07:32:21 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:29.878 07:32:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.878 07:32:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.878 07:32:21 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.878 07:32:21 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:29.878 07:32:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:34.099 07:32:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.099 07:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:34.099 07:32:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:34.099 07:32:24 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:34.100 07:32:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.100 07:32:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:34.100 07:32:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.100 07:32:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:34.100 07:32:25 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.100 07:32:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.358 MallocForNvmf0 00:05:34.358 07:32:25 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.358 07:32:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.617 MallocForNvmf1 
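[Editor's sketch] The two tgt_rpc calls traced above are thin wrappers around scripts/rpc.py. A minimal hand-run equivalent against an already-listening target would look like the following; the socket path and flags are copied from the trace, and the positional arguments are, per SPDK's bdev_malloc_create usage, the bdev's total size (MB) and block size (bytes):

    # create the two backing malloc bdevs the NVMf subsystem config uses next;
    # rpc.py prints the new bdev's name on success, matching the trace output
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1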
00:05:34.617 07:32:25 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.617 07:32:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.875 [2024-07-15 07:32:25.872382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.875 07:32:25 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.875 07:32:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.134 07:32:26 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.134 07:32:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.392 07:32:26 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.392 07:32:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.651 07:32:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.651 07:32:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.651 [2024-07-15 07:32:26.851778] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.651 07:32:26 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:35.651 07:32:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.651 07:32:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.908 07:32:26 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:35.908 07:32:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.908 07:32:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.908 07:32:26 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:35.908 07:32:26 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.908 07:32:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.166 MallocBdevForConfigChangeCheck 00:05:36.166 07:32:27 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:36.166 07:32:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.166 07:32:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.166 07:32:27 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:36.166 07:32:27 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.424 07:32:27 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:36.424 INFO: shutting down applications... 00:05:36.424 07:32:27 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:36.424 07:32:27 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:36.424 07:32:27 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:36.424 07:32:27 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:38.324 Calling clear_iscsi_subsystem 00:05:38.324 Calling clear_nvmf_subsystem 00:05:38.324 Calling clear_nbd_subsystem 00:05:38.324 Calling clear_ublk_subsystem 00:05:38.324 Calling clear_vhost_blk_subsystem 00:05:38.324 Calling clear_vhost_scsi_subsystem 00:05:38.324 Calling clear_bdev_subsystem 00:05:38.324 07:32:29 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:38.324 07:32:29 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:38.324 07:32:29 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:38.324 07:32:29 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.324 07:32:29 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:38.324 07:32:29 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:38.583 07:32:29 json_config -- json_config/json_config.sh@345 -- # break 00:05:38.583 07:32:29 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:38.583 07:32:29 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:38.583 07:32:29 json_config -- json_config/common.sh@31 -- # local app=target 00:05:38.583 07:32:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.583 07:32:29 json_config -- json_config/common.sh@35 -- # [[ -n 938671 ]] 00:05:38.583 07:32:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 938671 00:05:38.583 07:32:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.583 07:32:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.583 07:32:29 json_config -- json_config/common.sh@41 -- # kill -0 938671 00:05:38.583 07:32:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.149 07:32:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.149 07:32:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.149 07:32:30 json_config -- json_config/common.sh@41 -- # kill -0 938671 00:05:39.149 07:32:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.407 07:32:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.407 07:32:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.407 07:32:30 json_config -- json_config/common.sh@41 -- # kill -0 938671 00:05:39.407 07:32:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.975 07:32:31 
json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.975 07:32:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.975 07:32:31 json_config -- json_config/common.sh@41 -- # kill -0 938671 00:05:39.975 07:32:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.975 07:32:31 json_config -- json_config/common.sh@43 -- # break 00:05:39.975 07:32:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.975 07:32:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.975 SPDK target shutdown done 00:05:39.975 07:32:31 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:39.975 INFO: relaunching applications... 00:05:39.975 07:32:31 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.975 07:32:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:39.975 07:32:31 json_config -- json_config/common.sh@10 -- # shift 00:05:39.975 07:32:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.975 07:32:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.975 07:32:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.975 07:32:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.975 07:32:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.975 07:32:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=940127 00:05:39.975 07:32:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.975 07:32:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.975 Waiting for target to run... 00:05:39.975 07:32:31 json_config -- json_config/common.sh@25 -- # waitforlisten 940127 /var/tmp/spdk_tgt.sock 00:05:39.975 07:32:31 json_config -- common/autotest_common.sh@829 -- # '[' -z 940127 ']' 00:05:39.975 07:32:31 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.975 07:32:31 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.975 07:32:31 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.975 07:32:31 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.975 07:32:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.234 [2024-07-15 07:32:31.214808] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:40.234 [2024-07-15 07:32:31.214987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940127 ] 00:05:40.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.802 [2024-07-15 07:32:31.834350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.060 [2024-07-15 07:32:32.071700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.237 [2024-07-15 07:32:35.796859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.237 [2024-07-15 07:32:35.829445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.237 07:32:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.237 07:32:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:45.237 07:32:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.237 00:05:45.237 07:32:36 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:45.237 07:32:36 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.237 INFO: Checking if target configuration is the same... 00:05:45.237 07:32:36 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.237 07:32:36 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:45.237 07:32:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.237 + '[' 2 -ne 2 ']' 00:05:45.237 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.237 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.237 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.237 +++ basename /dev/fd/62 00:05:45.237 ++ mktemp /tmp/62.XXX 00:05:45.237 + tmp_file_1=/tmp/62.ZPF 00:05:45.237 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.237 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.237 + tmp_file_2=/tmp/spdk_tgt_config.json.S72 00:05:45.237 + ret=0 00:05:45.237 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.496 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.753 + diff -u /tmp/62.ZPF /tmp/spdk_tgt_config.json.S72 00:05:45.753 + echo 'INFO: JSON config files are the same' 00:05:45.753 INFO: JSON config files are the same 00:05:45.753 + rm /tmp/62.ZPF /tmp/spdk_tgt_config.json.S72 00:05:45.753 + exit 0 00:05:45.753 07:32:36 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:45.753 07:32:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:45.753 INFO: changing configuration and checking if this can be detected... 
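[Editor's sketch] The "JSON config files are the same" verdict above comes from json_diff.sh, which canonicalizes both configurations before diffing them. A rough standalone sketch of that check, assuming config_filter.py reads the config on stdin as the trace suggests:

    # compare the target's live config against the one saved earlier; both sides
    # go through config_filter.py -method sort so key ordering cannot cause
    # spurious differences, then diff -u decides (exit 0 == identical)
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.sorted
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/saved.sorted
    diff -u /tmp/live.sorted /tmp/saved.sorted && echo 'INFO: JSON config files are the same'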
00:05:45.753 07:32:36 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.753 07:32:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.010 07:32:37 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.010 07:32:37 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:46.010 07:32:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.010 + '[' 2 -ne 2 ']' 00:05:46.010 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.010 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.010 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.010 +++ basename /dev/fd/62 00:05:46.010 ++ mktemp /tmp/62.XXX 00:05:46.010 + tmp_file_1=/tmp/62.tPl 00:05:46.010 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.010 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.010 + tmp_file_2=/tmp/spdk_tgt_config.json.R5k 00:05:46.010 + ret=0 00:05:46.010 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.267 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.267 + diff -u /tmp/62.tPl /tmp/spdk_tgt_config.json.R5k 00:05:46.267 + ret=1 00:05:46.267 + echo '=== Start of file: /tmp/62.tPl ===' 00:05:46.267 + cat /tmp/62.tPl 00:05:46.267 + echo '=== End of file: /tmp/62.tPl ===' 00:05:46.267 + echo '' 00:05:46.267 + echo '=== Start of file: /tmp/spdk_tgt_config.json.R5k ===' 00:05:46.267 + cat /tmp/spdk_tgt_config.json.R5k 00:05:46.267 + echo '=== End of file: /tmp/spdk_tgt_config.json.R5k ===' 00:05:46.267 + echo '' 00:05:46.267 + rm /tmp/62.tPl /tmp/spdk_tgt_config.json.R5k 00:05:46.267 + exit 1 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:46.268 INFO: configuration change detected. 
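[Editor's sketch] The change-detection pass above is the negative counterpart of the previous check: the test deletes the throwaway MallocBdevForConfigChangeCheck bdev from the live target, re-runs the same sorted diff, and this time a non-empty diff (ret=1) is the expected outcome. Continuing the sketch above with the same assumed paths:

    # mutate the live configuration, then verify the diff machinery notices
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.sorted
    if ! diff -u /tmp/live.sorted /tmp/saved.sorted > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi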
00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@317 -- # [[ -n 940127 ]] 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.268 07:32:37 json_config -- json_config/json_config.sh@323 -- # killprocess 940127 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@948 -- # '[' -z 940127 ']' 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@952 -- # kill -0 940127 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@953 -- # uname 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940127 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940127' 00:05:46.268 killing process with pid 940127 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@967 -- # kill 940127 00:05:46.268 07:32:37 json_config -- common/autotest_common.sh@972 -- # wait 940127 00:05:48.794 07:32:39 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.794 07:32:39 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:48.794 07:32:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.794 07:32:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.794 07:32:39 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:48.794 07:32:39 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:48.794 INFO: Success 00:05:48.794 00:05:48.794 real 0m20.003s 00:05:48.794 user 
0m21.474s 00:05:48.794 sys 0m2.504s 00:05:48.794 07:32:39 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.794 07:32:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.794 ************************************ 00:05:48.794 END TEST json_config 00:05:48.794 ************************************ 00:05:48.794 07:32:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.794 07:32:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.794 07:32:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.794 07:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.794 07:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:48.794 ************************************ 00:05:48.794 START TEST json_config_extra_key 00:05:48.794 ************************************ 00:05:48.794 07:32:40 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.053 07:32:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.053 07:32:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.053 07:32:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.053 07:32:40 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.053 07:32:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.053 07:32:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.053 07:32:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:49.053 07:32:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.053 07:32:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:49.053 07:32:40 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:49.053 INFO: launching applications... 00:05:49.053 07:32:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=941307 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.053 Waiting for target to run... 00:05:49.053 07:32:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 941307 /var/tmp/spdk_tgt.sock 00:05:49.053 07:32:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 941307 ']' 00:05:49.053 07:32:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.053 07:32:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.053 07:32:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.053 07:32:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.053 07:32:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.053 [2024-07-15 07:32:40.155476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:49.053 [2024-07-15 07:32:40.155627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941307 ] 00:05:49.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.619 [2024-07-15 07:32:40.572388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.619 [2024-07-15 07:32:40.797313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.585 07:32:41 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.585 07:32:41 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.585 00:05:50.585 07:32:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:50.585 INFO: shutting down applications... 00:05:50.585 07:32:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 941307 ]] 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 941307 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 941307 00:05:50.585 07:32:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.848 07:32:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.848 07:32:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.848 07:32:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 941307 00:05:50.848 07:32:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.413 07:32:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.413 07:32:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.413 07:32:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 941307 00:05:51.413 07:32:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.980 07:32:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.980 07:32:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.980 07:32:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 941307 00:05:51.980 07:32:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.545 07:32:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.545 07:32:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.545 07:32:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 941307 00:05:52.545 07:32:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.803 07:32:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.803 07:32:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.803 07:32:44 json_config_extra_key -- 
json_config/common.sh@41 -- # kill -0 941307 00:05:52.803 07:32:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 941307 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:53.368 07:32:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:53.368 SPDK target shutdown done 00:05:53.368 07:32:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:53.368 Success 00:05:53.368 00:05:53.368 real 0m4.510s 00:05:53.368 user 0m4.219s 00:05:53.368 sys 0m0.648s 00:05:53.368 07:32:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.368 07:32:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.368 ************************************ 00:05:53.368 END TEST json_config_extra_key 00:05:53.368 ************************************ 00:05:53.368 07:32:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.368 07:32:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.368 07:32:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.368 07:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.368 07:32:44 -- common/autotest_common.sh@10 -- # set +x 00:05:53.368 ************************************ 00:05:53.368 START TEST alias_rpc 00:05:53.368 ************************************ 00:05:53.368 07:32:44 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.627 * Looking for test storage... 00:05:53.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:53.627 07:32:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.627 07:32:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=941890 00:05:53.627 07:32:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.627 07:32:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 941890 00:05:53.627 07:32:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 941890 ']' 00:05:53.627 07:32:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.627 07:32:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.627 07:32:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.627 07:32:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.627 07:32:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.627 [2024-07-15 07:32:44.717206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:53.627 [2024-07-15 07:32:44.717366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941890 ] 00:05:53.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.627 [2024-07-15 07:32:44.843181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.885 [2024-07-15 07:32:45.097341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.819 07:32:45 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.819 07:32:45 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.819 07:32:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:55.077 07:32:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 941890 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 941890 ']' 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 941890 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941890 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941890' 00:05:55.077 killing process with pid 941890 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@967 -- # kill 941890 00:05:55.077 07:32:46 alias_rpc -- common/autotest_common.sh@972 -- # wait 941890 00:05:57.606 00:05:57.606 real 0m4.190s 00:05:57.606 user 0m4.288s 00:05:57.606 sys 0m0.626s 00:05:57.606 07:32:48 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.606 07:32:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.606 ************************************ 00:05:57.606 END TEST alias_rpc 00:05:57.606 ************************************ 00:05:57.606 07:32:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.606 07:32:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:57.606 07:32:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:57.606 07:32:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.606 07:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.606 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:57.606 ************************************ 00:05:57.606 START TEST spdkcli_tcp 00:05:57.606 ************************************ 00:05:57.606 07:32:48 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:57.864 * Looking for test storage... 
00:05:57.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=942386 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:57.864 07:32:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 942386 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 942386 ']' 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.864 07:32:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.864 [2024-07-15 07:32:48.967431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:57.864 [2024-07-15 07:32:48.967579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942386 ] 00:05:57.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.123 [2024-07-15 07:32:49.096203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.123 [2024-07-15 07:32:49.351909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.123 [2024-07-15 07:32:49.351914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.055 07:32:50 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.055 07:32:50 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:59.055 07:32:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=942616 00:05:59.055 07:32:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.055 07:32:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.313 [ 00:05:59.313 "bdev_malloc_delete", 00:05:59.313 "bdev_malloc_create", 00:05:59.313 "bdev_null_resize", 00:05:59.313 "bdev_null_delete", 00:05:59.313 "bdev_null_create", 00:05:59.313 "bdev_nvme_cuse_unregister", 00:05:59.313 "bdev_nvme_cuse_register", 00:05:59.313 "bdev_opal_new_user", 00:05:59.313 "bdev_opal_set_lock_state", 00:05:59.313 "bdev_opal_delete", 00:05:59.313 "bdev_opal_get_info", 00:05:59.313 "bdev_opal_create", 00:05:59.313 "bdev_nvme_opal_revert", 00:05:59.313 "bdev_nvme_opal_init", 00:05:59.313 "bdev_nvme_send_cmd", 00:05:59.313 "bdev_nvme_get_path_iostat", 00:05:59.313 "bdev_nvme_get_mdns_discovery_info", 00:05:59.313 "bdev_nvme_stop_mdns_discovery", 00:05:59.313 "bdev_nvme_start_mdns_discovery", 00:05:59.313 "bdev_nvme_set_multipath_policy", 00:05:59.313 "bdev_nvme_set_preferred_path", 00:05:59.313 "bdev_nvme_get_io_paths", 00:05:59.313 "bdev_nvme_remove_error_injection", 00:05:59.313 "bdev_nvme_add_error_injection", 00:05:59.313 "bdev_nvme_get_discovery_info", 00:05:59.313 "bdev_nvme_stop_discovery", 00:05:59.313 "bdev_nvme_start_discovery", 00:05:59.313 "bdev_nvme_get_controller_health_info", 00:05:59.313 "bdev_nvme_disable_controller", 00:05:59.313 "bdev_nvme_enable_controller", 00:05:59.313 "bdev_nvme_reset_controller", 00:05:59.313 "bdev_nvme_get_transport_statistics", 00:05:59.313 "bdev_nvme_apply_firmware", 00:05:59.313 "bdev_nvme_detach_controller", 00:05:59.313 "bdev_nvme_get_controllers", 00:05:59.313 "bdev_nvme_attach_controller", 00:05:59.313 "bdev_nvme_set_hotplug", 00:05:59.313 "bdev_nvme_set_options", 00:05:59.313 "bdev_passthru_delete", 00:05:59.313 "bdev_passthru_create", 00:05:59.313 "bdev_lvol_set_parent_bdev", 00:05:59.313 "bdev_lvol_set_parent", 00:05:59.313 "bdev_lvol_check_shallow_copy", 00:05:59.313 "bdev_lvol_start_shallow_copy", 00:05:59.313 "bdev_lvol_grow_lvstore", 00:05:59.313 "bdev_lvol_get_lvols", 00:05:59.313 "bdev_lvol_get_lvstores", 00:05:59.313 "bdev_lvol_delete", 00:05:59.313 "bdev_lvol_set_read_only", 00:05:59.313 "bdev_lvol_resize", 00:05:59.313 "bdev_lvol_decouple_parent", 00:05:59.313 "bdev_lvol_inflate", 00:05:59.313 "bdev_lvol_rename", 00:05:59.313 "bdev_lvol_clone_bdev", 00:05:59.313 "bdev_lvol_clone", 00:05:59.313 "bdev_lvol_snapshot", 00:05:59.313 "bdev_lvol_create", 00:05:59.313 "bdev_lvol_delete_lvstore", 00:05:59.313 
"bdev_lvol_rename_lvstore", 00:05:59.313 "bdev_lvol_create_lvstore", 00:05:59.313 "bdev_raid_set_options", 00:05:59.313 "bdev_raid_remove_base_bdev", 00:05:59.313 "bdev_raid_add_base_bdev", 00:05:59.313 "bdev_raid_delete", 00:05:59.313 "bdev_raid_create", 00:05:59.313 "bdev_raid_get_bdevs", 00:05:59.313 "bdev_error_inject_error", 00:05:59.313 "bdev_error_delete", 00:05:59.313 "bdev_error_create", 00:05:59.313 "bdev_split_delete", 00:05:59.313 "bdev_split_create", 00:05:59.313 "bdev_delay_delete", 00:05:59.313 "bdev_delay_create", 00:05:59.313 "bdev_delay_update_latency", 00:05:59.313 "bdev_zone_block_delete", 00:05:59.313 "bdev_zone_block_create", 00:05:59.313 "blobfs_create", 00:05:59.313 "blobfs_detect", 00:05:59.313 "blobfs_set_cache_size", 00:05:59.313 "bdev_aio_delete", 00:05:59.313 "bdev_aio_rescan", 00:05:59.313 "bdev_aio_create", 00:05:59.313 "bdev_ftl_set_property", 00:05:59.313 "bdev_ftl_get_properties", 00:05:59.313 "bdev_ftl_get_stats", 00:05:59.313 "bdev_ftl_unmap", 00:05:59.313 "bdev_ftl_unload", 00:05:59.313 "bdev_ftl_delete", 00:05:59.313 "bdev_ftl_load", 00:05:59.313 "bdev_ftl_create", 00:05:59.313 "bdev_virtio_attach_controller", 00:05:59.313 "bdev_virtio_scsi_get_devices", 00:05:59.314 "bdev_virtio_detach_controller", 00:05:59.314 "bdev_virtio_blk_set_hotplug", 00:05:59.314 "bdev_iscsi_delete", 00:05:59.314 "bdev_iscsi_create", 00:05:59.314 "bdev_iscsi_set_options", 00:05:59.314 "accel_error_inject_error", 00:05:59.314 "ioat_scan_accel_module", 00:05:59.314 "dsa_scan_accel_module", 00:05:59.314 "iaa_scan_accel_module", 00:05:59.314 "keyring_file_remove_key", 00:05:59.314 "keyring_file_add_key", 00:05:59.314 "keyring_linux_set_options", 00:05:59.314 "iscsi_get_histogram", 00:05:59.314 "iscsi_enable_histogram", 00:05:59.314 "iscsi_set_options", 00:05:59.314 "iscsi_get_auth_groups", 00:05:59.314 "iscsi_auth_group_remove_secret", 00:05:59.314 "iscsi_auth_group_add_secret", 00:05:59.314 "iscsi_delete_auth_group", 00:05:59.314 "iscsi_create_auth_group", 00:05:59.314 "iscsi_set_discovery_auth", 00:05:59.314 "iscsi_get_options", 00:05:59.314 "iscsi_target_node_request_logout", 00:05:59.314 "iscsi_target_node_set_redirect", 00:05:59.314 "iscsi_target_node_set_auth", 00:05:59.314 "iscsi_target_node_add_lun", 00:05:59.314 "iscsi_get_stats", 00:05:59.314 "iscsi_get_connections", 00:05:59.314 "iscsi_portal_group_set_auth", 00:05:59.314 "iscsi_start_portal_group", 00:05:59.314 "iscsi_delete_portal_group", 00:05:59.314 "iscsi_create_portal_group", 00:05:59.314 "iscsi_get_portal_groups", 00:05:59.314 "iscsi_delete_target_node", 00:05:59.314 "iscsi_target_node_remove_pg_ig_maps", 00:05:59.314 "iscsi_target_node_add_pg_ig_maps", 00:05:59.314 "iscsi_create_target_node", 00:05:59.314 "iscsi_get_target_nodes", 00:05:59.314 "iscsi_delete_initiator_group", 00:05:59.314 "iscsi_initiator_group_remove_initiators", 00:05:59.314 "iscsi_initiator_group_add_initiators", 00:05:59.314 "iscsi_create_initiator_group", 00:05:59.314 "iscsi_get_initiator_groups", 00:05:59.314 "nvmf_set_crdt", 00:05:59.314 "nvmf_set_config", 00:05:59.314 "nvmf_set_max_subsystems", 00:05:59.314 "nvmf_stop_mdns_prr", 00:05:59.314 "nvmf_publish_mdns_prr", 00:05:59.314 "nvmf_subsystem_get_listeners", 00:05:59.314 "nvmf_subsystem_get_qpairs", 00:05:59.314 "nvmf_subsystem_get_controllers", 00:05:59.314 "nvmf_get_stats", 00:05:59.314 "nvmf_get_transports", 00:05:59.314 "nvmf_create_transport", 00:05:59.314 "nvmf_get_targets", 00:05:59.314 "nvmf_delete_target", 00:05:59.314 "nvmf_create_target", 00:05:59.314 
"nvmf_subsystem_allow_any_host", 00:05:59.314 "nvmf_subsystem_remove_host", 00:05:59.314 "nvmf_subsystem_add_host", 00:05:59.314 "nvmf_ns_remove_host", 00:05:59.314 "nvmf_ns_add_host", 00:05:59.314 "nvmf_subsystem_remove_ns", 00:05:59.314 "nvmf_subsystem_add_ns", 00:05:59.314 "nvmf_subsystem_listener_set_ana_state", 00:05:59.314 "nvmf_discovery_get_referrals", 00:05:59.314 "nvmf_discovery_remove_referral", 00:05:59.314 "nvmf_discovery_add_referral", 00:05:59.314 "nvmf_subsystem_remove_listener", 00:05:59.314 "nvmf_subsystem_add_listener", 00:05:59.314 "nvmf_delete_subsystem", 00:05:59.314 "nvmf_create_subsystem", 00:05:59.314 "nvmf_get_subsystems", 00:05:59.314 "env_dpdk_get_mem_stats", 00:05:59.314 "nbd_get_disks", 00:05:59.314 "nbd_stop_disk", 00:05:59.314 "nbd_start_disk", 00:05:59.314 "ublk_recover_disk", 00:05:59.314 "ublk_get_disks", 00:05:59.314 "ublk_stop_disk", 00:05:59.314 "ublk_start_disk", 00:05:59.314 "ublk_destroy_target", 00:05:59.314 "ublk_create_target", 00:05:59.314 "virtio_blk_create_transport", 00:05:59.314 "virtio_blk_get_transports", 00:05:59.314 "vhost_controller_set_coalescing", 00:05:59.314 "vhost_get_controllers", 00:05:59.314 "vhost_delete_controller", 00:05:59.314 "vhost_create_blk_controller", 00:05:59.314 "vhost_scsi_controller_remove_target", 00:05:59.314 "vhost_scsi_controller_add_target", 00:05:59.314 "vhost_start_scsi_controller", 00:05:59.314 "vhost_create_scsi_controller", 00:05:59.314 "thread_set_cpumask", 00:05:59.314 "framework_get_governor", 00:05:59.314 "framework_get_scheduler", 00:05:59.314 "framework_set_scheduler", 00:05:59.314 "framework_get_reactors", 00:05:59.314 "thread_get_io_channels", 00:05:59.314 "thread_get_pollers", 00:05:59.314 "thread_get_stats", 00:05:59.314 "framework_monitor_context_switch", 00:05:59.314 "spdk_kill_instance", 00:05:59.314 "log_enable_timestamps", 00:05:59.314 "log_get_flags", 00:05:59.314 "log_clear_flag", 00:05:59.314 "log_set_flag", 00:05:59.314 "log_get_level", 00:05:59.314 "log_set_level", 00:05:59.314 "log_get_print_level", 00:05:59.314 "log_set_print_level", 00:05:59.314 "framework_enable_cpumask_locks", 00:05:59.314 "framework_disable_cpumask_locks", 00:05:59.314 "framework_wait_init", 00:05:59.314 "framework_start_init", 00:05:59.314 "scsi_get_devices", 00:05:59.314 "bdev_get_histogram", 00:05:59.314 "bdev_enable_histogram", 00:05:59.314 "bdev_set_qos_limit", 00:05:59.314 "bdev_set_qd_sampling_period", 00:05:59.314 "bdev_get_bdevs", 00:05:59.314 "bdev_reset_iostat", 00:05:59.314 "bdev_get_iostat", 00:05:59.314 "bdev_examine", 00:05:59.314 "bdev_wait_for_examine", 00:05:59.314 "bdev_set_options", 00:05:59.314 "notify_get_notifications", 00:05:59.314 "notify_get_types", 00:05:59.314 "accel_get_stats", 00:05:59.314 "accel_set_options", 00:05:59.314 "accel_set_driver", 00:05:59.314 "accel_crypto_key_destroy", 00:05:59.314 "accel_crypto_keys_get", 00:05:59.314 "accel_crypto_key_create", 00:05:59.314 "accel_assign_opc", 00:05:59.314 "accel_get_module_info", 00:05:59.314 "accel_get_opc_assignments", 00:05:59.314 "vmd_rescan", 00:05:59.314 "vmd_remove_device", 00:05:59.314 "vmd_enable", 00:05:59.314 "sock_get_default_impl", 00:05:59.314 "sock_set_default_impl", 00:05:59.314 "sock_impl_set_options", 00:05:59.314 "sock_impl_get_options", 00:05:59.314 "iobuf_get_stats", 00:05:59.314 "iobuf_set_options", 00:05:59.314 "framework_get_pci_devices", 00:05:59.314 "framework_get_config", 00:05:59.314 "framework_get_subsystems", 00:05:59.314 "trace_get_info", 00:05:59.314 "trace_get_tpoint_group_mask", 00:05:59.314 
"trace_disable_tpoint_group", 00:05:59.314 "trace_enable_tpoint_group", 00:05:59.314 "trace_clear_tpoint_mask", 00:05:59.314 "trace_set_tpoint_mask", 00:05:59.314 "keyring_get_keys", 00:05:59.314 "spdk_get_version", 00:05:59.314 "rpc_get_methods" 00:05:59.314 ] 00:05:59.314 07:32:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.314 07:32:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:59.314 07:32:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 942386 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 942386 ']' 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 942386 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942386 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942386' 00:05:59.314 killing process with pid 942386 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 942386 00:05:59.314 07:32:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 942386 00:06:01.841 00:06:01.841 real 0m3.965s 00:06:01.841 user 0m6.941s 00:06:01.841 sys 0m0.690s 00:06:01.841 07:32:52 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.842 07:32:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.842 ************************************ 00:06:01.842 END TEST spdkcli_tcp 00:06:01.842 ************************************ 00:06:01.842 07:32:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.842 07:32:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.842 07:32:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.842 07:32:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.842 07:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:01.842 ************************************ 00:06:01.842 START TEST dpdk_mem_utility 00:06:01.842 ************************************ 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.842 * Looking for test storage... 
00:06:01.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.842 07:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.842 07:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=942952 00:06:01.842 07:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.842 07:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 942952 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 942952 ']' 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.842 07:32:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.842 [2024-07-15 07:32:52.974102] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:01.842 [2024-07-15 07:32:52.974273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942952 ] 00:06:01.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.100 [2024-07-15 07:32:53.098186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.358 [2024-07-15 07:32:53.351712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.293 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.293 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:03.293 07:32:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.293 07:32:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.293 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.293 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.293 { 00:06:03.293 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.293 } 00:06:03.293 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.293 07:32:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.293 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:03.293 1 heaps totaling size 820.000000 MiB 00:06:03.293 size: 820.000000 MiB heap id: 0 00:06:03.293 end heaps---------- 00:06:03.293 8 mempools totaling size 598.116089 MiB 00:06:03.293 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.293 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.293 size: 84.521057 MiB name: bdev_io_942952 00:06:03.293 size: 51.011292 MiB name: evtpool_942952 00:06:03.293 size: 
50.003479 MiB name: msgpool_942952 00:06:03.293 size: 21.763794 MiB name: PDU_Pool 00:06:03.293 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.293 size: 0.026123 MiB name: Session_Pool 00:06:03.293 end mempools------- 00:06:03.293 6 memzones totaling size 4.142822 MiB 00:06:03.293 size: 1.000366 MiB name: RG_ring_0_942952 00:06:03.293 size: 1.000366 MiB name: RG_ring_1_942952 00:06:03.293 size: 1.000366 MiB name: RG_ring_4_942952 00:06:03.293 size: 1.000366 MiB name: RG_ring_5_942952 00:06:03.293 size: 0.125366 MiB name: RG_ring_2_942952 00:06:03.293 size: 0.015991 MiB name: RG_ring_3_942952 00:06:03.293 end memzones------- 00:06:03.293 07:32:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.293 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:03.293 list of free elements. size: 18.514832 MiB 00:06:03.293 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:03.293 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:03.293 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:03.293 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:03.293 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:03.293 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:03.293 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:03.293 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:03.293 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:03.293 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:03.293 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:03.293 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:03.293 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:03.293 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:03.293 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:03.293 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:03.293 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:03.293 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:03.293 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:03.293 list of standard malloc elements. 
size: 199.220764 MiB 00:06:03.293 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:03.293 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:03.293 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:03.293 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:03.293 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:03.293 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:03.294 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:03.294 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:03.294 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:03.294 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:03.294 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:03.294 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:03.294 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:03.294 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:03.294 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:03.294 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:03.294 list of memzone associated elements. 
size: 602.264404 MiB 00:06:03.294 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:03.294 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.294 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:03.294 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.294 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:03.294 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_942952_0 00:06:03.294 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:03.294 associated memzone info: size: 48.002930 MiB name: MP_evtpool_942952_0 00:06:03.294 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:03.294 associated memzone info: size: 48.002930 MiB name: MP_msgpool_942952_0 00:06:03.294 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:03.294 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.294 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:03.294 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.294 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:03.294 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_942952 00:06:03.294 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:03.294 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_942952 00:06:03.294 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:03.294 associated memzone info: size: 1.007996 MiB name: MP_evtpool_942952 00:06:03.294 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:03.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.294 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:03.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.294 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:03.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.294 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:03.294 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.294 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:03.294 associated memzone info: size: 1.000366 MiB name: RG_ring_0_942952 00:06:03.294 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:03.294 associated memzone info: size: 1.000366 MiB name: RG_ring_1_942952 00:06:03.294 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:03.294 associated memzone info: size: 1.000366 MiB name: RG_ring_4_942952 00:06:03.294 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:03.294 associated memzone info: size: 1.000366 MiB name: RG_ring_5_942952 00:06:03.294 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:03.294 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_942952 00:06:03.294 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:03.294 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.294 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:03.294 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.294 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:03.294 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.294 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:03.294 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_942952 00:06:03.294 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:03.294 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.294 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:03.294 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.294 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:03.294 associated memzone info: size: 0.015991 MiB name: RG_ring_3_942952 00:06:03.294 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:03.294 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.294 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:03.294 associated memzone info: size: 0.000183 MiB name: MP_msgpool_942952 00:06:03.294 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:03.294 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_942952 00:06:03.294 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:03.294 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.294 07:32:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.294 07:32:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 942952 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 942952 ']' 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 942952 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942952 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942952' 00:06:03.294 killing process with pid 942952 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 942952 00:06:03.294 07:32:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 942952 00:06:05.824 00:06:05.824 real 0m4.030s 00:06:05.824 user 0m4.049s 00:06:05.824 sys 0m0.587s 00:06:05.824 07:32:56 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.824 07:32:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 ************************************ 00:06:05.824 END TEST dpdk_mem_utility 00:06:05.824 ************************************ 00:06:05.824 07:32:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.824 07:32:56 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:05.824 07:32:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.824 07:32:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.824 07:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 ************************************ 00:06:05.824 START TEST event 00:06:05.824 ************************************ 00:06:05.824 07:32:56 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:05.824 * Looking for test storage... 
00:06:05.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.824 07:32:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:05.824 07:32:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.824 07:32:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.824 07:32:56 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.824 07:32:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.824 07:32:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 ************************************ 00:06:05.824 START TEST event_perf 00:06:05.824 ************************************ 00:06:05.824 07:32:56 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.824 Running I/O for 1 seconds...[2024-07-15 07:32:57.013507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:05.824 [2024-07-15 07:32:57.013619] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943539 ] 00:06:06.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.083 [2024-07-15 07:32:57.143489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.370 [2024-07-15 07:32:57.411750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.370 [2024-07-15 07:32:57.411804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.370 [2024-07-15 07:32:57.411848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.370 [2024-07-15 07:32:57.411859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.746 Running I/O for 1 seconds... 00:06:07.746 lcore 0: 191255 00:06:07.746 lcore 1: 191253 00:06:07.746 lcore 2: 191252 00:06:07.746 lcore 3: 191253 00:06:07.746 done. 00:06:07.746 00:06:07.746 real 0m1.897s 00:06:07.746 user 0m4.702s 00:06:07.746 sys 0m0.176s 00:06:07.746 07:32:58 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.746 07:32:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.746 ************************************ 00:06:07.746 END TEST event_perf 00:06:07.746 ************************************ 00:06:07.746 07:32:58 event -- common/autotest_common.sh@1142 -- # return 0 00:06:07.746 07:32:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.746 07:32:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:07.746 07:32:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.746 07:32:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.746 ************************************ 00:06:07.746 START TEST event_reactor 00:06:07.746 ************************************ 00:06:07.746 07:32:58 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.746 [2024-07-15 07:32:58.954685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:07.746 [2024-07-15 07:32:58.954796] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943706 ] 00:06:08.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.005 [2024-07-15 07:32:59.085860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.262 [2024-07-15 07:32:59.348422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.635 test_start 00:06:09.635 oneshot 00:06:09.635 tick 100 00:06:09.635 tick 100 00:06:09.635 tick 250 00:06:09.635 tick 100 00:06:09.635 tick 100 00:06:09.635 tick 100 00:06:09.635 tick 250 00:06:09.635 tick 500 00:06:09.635 tick 100 00:06:09.635 tick 100 00:06:09.635 tick 250 00:06:09.635 tick 100 00:06:09.635 tick 100 00:06:09.635 test_end 00:06:09.635 00:06:09.635 real 0m1.877s 00:06:09.635 user 0m1.713s 00:06:09.635 sys 0m0.153s 00:06:09.635 07:33:00 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.635 07:33:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.635 ************************************ 00:06:09.635 END TEST event_reactor 00:06:09.635 ************************************ 00:06:09.635 07:33:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.635 07:33:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.635 07:33:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.635 07:33:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.635 07:33:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.635 ************************************ 00:06:09.635 START TEST event_reactor_perf 00:06:09.635 ************************************ 00:06:09.635 07:33:00 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.893 [2024-07-15 07:33:00.881985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:09.893 [2024-07-15 07:33:00.882124] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944046 ] 00:06:09.893 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.893 [2024-07-15 07:33:01.012498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.152 [2024-07-15 07:33:01.273338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.523 test_start 00:06:11.523 test_end 00:06:11.523 Performance: 267423 events per second 00:06:11.523 00:06:11.523 real 0m1.885s 00:06:11.523 user 0m1.703s 00:06:11.523 sys 0m0.169s 00:06:11.523 07:33:02 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.523 07:33:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.523 ************************************ 00:06:11.523 END TEST event_reactor_perf 00:06:11.523 ************************************ 00:06:11.523 07:33:02 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.781 07:33:02 event -- event/event.sh@49 -- # uname -s 00:06:11.781 07:33:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.781 07:33:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.781 07:33:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.781 07:33:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.781 07:33:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.781 ************************************ 00:06:11.781 START TEST event_scheduler 00:06:11.781 ************************************ 00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.781 * Looking for test storage... 00:06:11.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:11.781 07:33:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.781 07:33:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=944410 00:06:11.781 07:33:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.781 07:33:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.781 07:33:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 944410 00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 944410 ']' 00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.781 07:33:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.781 [2024-07-15 07:33:02.916071] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:11.782 [2024-07-15 07:33:02.916227] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944410 ] 00:06:11.782 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.040 [2024-07-15 07:33:03.043155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.040 [2024-07-15 07:33:03.263940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.040 [2024-07-15 07:33:03.263996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.040 [2024-07-15 07:33:03.264038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.040 [2024-07-15 07:33:03.264045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:12.968 07:33:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.968 [2024-07-15 07:33:03.854769] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:12.968 [2024-07-15 07:33:03.854829] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.968 [2024-07-15 07:33:03.854862] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.968 [2024-07-15 07:33:03.854909] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.968 [2024-07-15 07:33:03.854929] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.968 07:33:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.968 07:33:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.968 [2024-07-15 07:33:04.159137] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:12.968 07:33:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.968 07:33:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.968 07:33:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.968 07:33:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.968 07:33:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.968 ************************************ 00:06:12.968 START TEST scheduler_create_thread 00:06:12.968 ************************************ 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.968 2 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.968 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.224 3 00:06:13.224 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.224 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:13.224 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.224 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 4 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 5 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 6 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 7 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 8 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 9 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 10 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.225 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.789 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.789 00:06:13.789 real 0m0.592s 00:06:13.789 user 0m0.008s 00:06:13.789 sys 0m0.005s 00:06:13.789 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.789 07:33:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.789 ************************************ 00:06:13.789 END TEST scheduler_create_thread 00:06:13.789 ************************************ 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:13.789 07:33:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:13.789 07:33:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 944410 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 944410 ']' 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 944410 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944410 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944410' 00:06:13.789 killing process with pid 944410 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 944410 00:06:13.789 07:33:04 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 944410 00:06:14.045 [2024-07-15 07:33:05.259456] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:15.423 00:06:15.423 real 0m3.633s 00:06:15.423 user 0m6.978s 00:06:15.423 sys 0m0.515s 00:06:15.423 07:33:06 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.423 07:33:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.423 ************************************ 00:06:15.423 END TEST event_scheduler 00:06:15.423 ************************************ 00:06:15.423 07:33:06 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.423 07:33:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.423 07:33:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.423 07:33:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.423 07:33:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.423 07:33:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.423 ************************************ 00:06:15.423 START TEST app_repeat 00:06:15.423 ************************************ 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=944866 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 944866' 00:06:15.424 Process app_repeat pid: 944866 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.424 spdk_app_start Round 0 00:06:15.424 07:33:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 944866 /var/tmp/spdk-nbd.sock 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 944866 ']' 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.424 07:33:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.424 [2024-07-15 07:33:06.525033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:15.424 [2024-07-15 07:33:06.525201] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944866 ] 00:06:15.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.681 [2024-07-15 07:33:06.656127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.940 [2024-07-15 07:33:06.916446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.940 [2024-07-15 07:33:06.916453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.505 07:33:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.505 07:33:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.505 07:33:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.763 Malloc0 00:06:16.763 07:33:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.021 Malloc1 00:06:17.022 07:33:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.022 07:33:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.297 /dev/nbd0 00:06:17.297 07:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.297 07:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.297 07:33:08 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.297 1+0 records in 00:06:17.297 1+0 records out 00:06:17.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344675 s, 11.9 MB/s 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.297 07:33:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.297 07:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.297 07:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.297 07:33:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.555 /dev/nbd1 00:06:17.555 07:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.555 07:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.555 1+0 records in 00:06:17.555 1+0 records out 00:06:17.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219478 s, 18.7 MB/s 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.555 07:33:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.555 07:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.555 07:33:08 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.555 07:33:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.555 07:33:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.555 07:33:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.813 07:33:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.813 { 00:06:17.813 "nbd_device": "/dev/nbd0", 00:06:17.813 "bdev_name": "Malloc0" 00:06:17.813 }, 00:06:17.813 { 00:06:17.813 "nbd_device": "/dev/nbd1", 00:06:17.813 "bdev_name": "Malloc1" 00:06:17.813 } 00:06:17.813 ]' 00:06:17.813 07:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.813 { 00:06:17.813 "nbd_device": "/dev/nbd0", 00:06:17.813 "bdev_name": "Malloc0" 00:06:17.813 }, 00:06:17.813 { 00:06:17.813 "nbd_device": "/dev/nbd1", 00:06:17.813 "bdev_name": "Malloc1" 00:06:17.813 } 00:06:17.813 ]' 00:06:17.813 07:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.072 /dev/nbd1' 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.072 /dev/nbd1' 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.072 256+0 records in 00:06:18.072 256+0 records out 00:06:18.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404009 s, 260 MB/s 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.072 256+0 records in 00:06:18.072 256+0 records out 00:06:18.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277461 s, 37.8 MB/s 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.072 256+0 records in 00:06:18.072 256+0 records out 00:06:18.072 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0321303 s, 32.6 MB/s 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.072 07:33:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.073 07:33:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.331 07:33:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.590 07:33:09 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.590 07:33:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.848 07:33:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.848 07:33:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.416 07:33:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.792 [2024-07-15 07:33:11.808278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.075 [2024-07-15 07:33:12.062009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.075 [2024-07-15 07:33:12.062013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.075 [2024-07-15 07:33:12.283342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.075 [2024-07-15 07:33:12.283429] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.450 07:33:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.450 07:33:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:22.450 spdk_app_start Round 1 00:06:22.450 07:33:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 944866 /var/tmp/spdk-nbd.sock 00:06:22.450 07:33:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 944866 ']' 00:06:22.450 07:33:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.450 07:33:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.450 07:33:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
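
Each round brackets its nbd work with nbd_get_count, which is just the RPC disk list piped through jq. A sketch matching the jq/grep pair in the trace (rpc.py shown by relative path; the run itself uses the full workspace path):

    # Count how many /dev/nbd* devices the app currently exports.
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name
        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        echo "$nbd_disks_name" | grep -c /dev/nbd || true   # grep -c prints 0 on no match
    }

The harness expects 2 right after both nbd_start_disk calls and 0 after both nbd_stop_disk calls, which is exactly the '[' 2 -ne 2 ']' and '[' 0 -ne 0 ']' no-op branches in the log.
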
00:06:22.450 07:33:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.450 07:33:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.707 07:33:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.707 07:33:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:22.707 07:33:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.964 Malloc0 00:06:22.964 07:33:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.222 Malloc1 00:06:23.222 07:33:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.222 07:33:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.480 /dev/nbd0 00:06:23.480 07:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.480 07:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:23.480 1+0 records in 00:06:23.480 1+0 records out 00:06:23.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189143 s, 21.7 MB/s 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.480 07:33:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.480 07:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.480 07:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.480 07:33:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.738 /dev/nbd1 00:06:23.738 07:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.738 07:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.738 07:33:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.995 1+0 records in 00:06:23.995 1+0 records out 00:06:23.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196865 s, 20.8 MB/s 00:06:23.996 07:33:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.996 07:33:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.996 07:33:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.996 07:33:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.996 07:33:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.996 07:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.996 07:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.996 07:33:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.996 07:33:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.996 07:33:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:24.254 { 00:06:24.254 "nbd_device": "/dev/nbd0", 00:06:24.254 "bdev_name": "Malloc0" 00:06:24.254 }, 00:06:24.254 { 00:06:24.254 "nbd_device": "/dev/nbd1", 00:06:24.254 "bdev_name": "Malloc1" 00:06:24.254 } 00:06:24.254 ]' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.254 { 00:06:24.254 "nbd_device": "/dev/nbd0", 00:06:24.254 "bdev_name": "Malloc0" 00:06:24.254 }, 00:06:24.254 { 00:06:24.254 "nbd_device": "/dev/nbd1", 00:06:24.254 "bdev_name": "Malloc1" 00:06:24.254 } 00:06:24.254 ]' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.254 /dev/nbd1' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.254 /dev/nbd1' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.254 256+0 records in 00:06:24.254 256+0 records out 00:06:24.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501974 s, 209 MB/s 00:06:24.254 07:33:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.255 256+0 records in 00:06:24.255 256+0 records out 00:06:24.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241975 s, 43.3 MB/s 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.255 256+0 records in 00:06:24.255 256+0 records out 00:06:24.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301281 s, 34.8 MB/s 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.255 07:33:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.512 07:33:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.770 07:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.029 07:33:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.029 07:33:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.594 07:33:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.972 [2024-07-15 07:33:18.041974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.232 [2024-07-15 07:33:18.300738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.232 [2024-07-15 07:33:18.300738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.492 [2024-07-15 07:33:18.520643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.492 [2024-07-15 07:33:18.520771] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.431 07:33:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.431 07:33:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.431 spdk_app_start Round 2 00:06:28.431 07:33:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 944866 /var/tmp/spdk-nbd.sock 00:06:28.431 07:33:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 944866 ']' 00:06:28.431 07:33:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.431 07:33:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.431 07:33:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
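
The 256-block dd runs and cmp calls in each round come from the write/verify helper. A sketch of that pattern, with the scratch file moved to /tmp for brevity (the run keeps it under spdk/test/event/nbdrandtest):

    # Push 1 MiB of random data through each nbd device, then read it back.
    nbd_dd_data_verify() {
        local nbd_list=('/dev/nbd0' '/dev/nbd1')
        local operation=$1                  # write | verify
        local tmp_file=/tmp/nbdrandtest
        local i

        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB source
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # nonzero exit on any byte mismatch
            done
            rm "$tmp_file"
        fi
    }

O_DIRECT on the writes keeps the page cache out of the picture, so the cmp pass really exercises the Malloc bdev behind the nbd device rather than cached pages.
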
00:06:28.431 07:33:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.431 07:33:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.689 07:33:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.689 07:33:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:28.689 07:33:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.257 Malloc0 00:06:29.257 07:33:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.515 Malloc1 00:06:29.515 07:33:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.515 07:33:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.773 /dev/nbd0 00:06:29.773 07:33:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.773 07:33:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:29.773 1+0 records in 00:06:29.773 1+0 records out 00:06:29.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179272 s, 22.8 MB/s 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.773 07:33:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:29.773 07:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.773 07:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.773 07:33:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.032 /dev/nbd1 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.032 1+0 records in 00:06:30.032 1+0 records out 00:06:30.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259874 s, 15.8 MB/s 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.032 07:33:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.032 07:33:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:30.290 { 00:06:30.290 "nbd_device": "/dev/nbd0", 00:06:30.290 "bdev_name": "Malloc0" 00:06:30.290 }, 00:06:30.290 { 00:06:30.290 "nbd_device": "/dev/nbd1", 00:06:30.290 "bdev_name": "Malloc1" 00:06:30.290 } 00:06:30.290 ]' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.290 { 00:06:30.290 "nbd_device": "/dev/nbd0", 00:06:30.290 "bdev_name": "Malloc0" 00:06:30.290 }, 00:06:30.290 { 00:06:30.290 "nbd_device": "/dev/nbd1", 00:06:30.290 "bdev_name": "Malloc1" 00:06:30.290 } 00:06:30.290 ]' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.290 /dev/nbd1' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.290 /dev/nbd1' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.290 256+0 records in 00:06:30.290 256+0 records out 00:06:30.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516862 s, 203 MB/s 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.290 256+0 records in 00:06:30.290 256+0 records out 00:06:30.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240529 s, 43.6 MB/s 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.290 256+0 records in 00:06:30.290 256+0 records out 00:06:30.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289072 s, 36.3 MB/s 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.290 07:33:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.549 07:33:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.808 07:33:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.066 07:33:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.324 07:33:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.324 07:33:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.582 07:33:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.961 [2024-07-15 07:33:24.188266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.221 [2024-07-15 07:33:24.443140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.221 [2024-07-15 07:33:24.443144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.481 [2024-07-15 07:33:24.661818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.481 [2024-07-15 07:33:24.661935] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.858 07:33:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 944866 /var/tmp/spdk-nbd.sock 00:06:34.858 07:33:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 944866 ']' 00:06:34.858 07:33:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.858 07:33:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.858 07:33:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
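
Before any dd touches /dev/nbd0 or /dev/nbd1, the grep/dd/stat triplets in the trace gate on the device actually existing. A sketch of that waitfornbd poll (the retry delay is assumed; every attempt in this run succeeds on the first pass, so the trace never shows one):

    # Wait until the kernel exposes the nbd device, then prove it is readable.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB O_DIRECT read; a failed or empty read means not ready yet
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

The 1+0 records in/out pairs and the '[' 4096 '!=' 0 ']' checks scattered through this test are this helper firing once per device per round.
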
00:06:34.858 07:33:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.858 07:33:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:34.859 07:33:26 event.app_repeat -- event/event.sh@39 -- # killprocess 944866 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 944866 ']' 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 944866 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944866 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944866' 00:06:34.859 killing process with pid 944866 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@967 -- # kill 944866 00:06:34.859 07:33:26 event.app_repeat -- common/autotest_common.sh@972 -- # wait 944866 00:06:36.263 spdk_app_start is called in Round 0. 00:06:36.263 Shutdown signal received, stop current app iteration 00:06:36.263 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:36.263 spdk_app_start is called in Round 1. 00:06:36.263 Shutdown signal received, stop current app iteration 00:06:36.263 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:36.263 spdk_app_start is called in Round 2. 00:06:36.263 Shutdown signal received, stop current app iteration 00:06:36.263 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:36.263 spdk_app_start is called in Round 3. 
00:06:36.263 Shutdown signal received, stop current app iteration 00:06:36.263 07:33:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.263 07:33:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.263 00:06:36.263 real 0m20.877s 00:06:36.263 user 0m42.877s 00:06:36.263 sys 0m3.455s 00:06:36.263 07:33:27 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.263 07:33:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.264 ************************************ 00:06:36.264 END TEST app_repeat 00:06:36.264 ************************************ 00:06:36.264 07:33:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.264 07:33:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.264 07:33:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.264 07:33:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.264 07:33:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.264 07:33:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.264 ************************************ 00:06:36.264 START TEST cpu_locks 00:06:36.264 ************************************ 00:06:36.264 07:33:27 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.264 * Looking for test storage... 00:06:36.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.264 07:33:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.264 07:33:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.264 07:33:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.264 07:33:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.264 07:33:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.264 07:33:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.264 07:33:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.264 ************************************ 00:06:36.264 START TEST default_locks 00:06:36.264 ************************************ 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=948117 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 948117 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 948117 ']' 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
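
The cpu_locks suite starting here leans on the per-core lock files the SPDK target takes at startup; the test only needs to ask lslocks whether the target pid holds one. A sketch of the locks_exist check as the next trace block shows it:

    # True when the given pid holds an spdk_cpu_lock file lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

The "lslocks: write error" line in the output is lslocks complaining that grep -q closed the pipe after the first match; it is expected noise, not a failure, just as the later "No such process" / "ERROR: process (pid: 948117) is no longer running" lines are the deliberate negative check after the target has been killed.
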
00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.264 07:33:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.523 [2024-07-15 07:33:27.566239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:36.523 [2024-07-15 07:33:27.566392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948117 ] 00:06:36.523 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.523 [2024-07-15 07:33:27.691568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.783 [2024-07-15 07:33:27.952491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.724 07:33:28 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.724 07:33:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:37.724 07:33:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 948117 00:06:37.724 07:33:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 948117 00:06:37.724 07:33:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.983 lslocks: write error 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 948117 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 948117 ']' 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 948117 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948117 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948117' 00:06:37.983 killing process with pid 948117 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 948117 00:06:37.983 07:33:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 948117 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 948117 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 948117 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 948117 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 948117 ']' 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (948117) - No such process 00:06:40.524 ERROR: process (pid: 948117) is no longer running 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.524 00:06:40.524 real 0m4.261s 00:06:40.524 user 0m4.270s 00:06:40.524 sys 0m0.719s 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.524 07:33:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.524 ************************************ 00:06:40.524 END TEST default_locks 00:06:40.524 ************************************ 00:06:40.783 07:33:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.783 07:33:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.783 07:33:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.783 07:33:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.783 07:33:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.783 ************************************ 00:06:40.783 START TEST default_locks_via_rpc 00:06:40.783 ************************************ 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=948562 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.783 07:33:31 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 948562 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 948562 ']' 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.783 07:33:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.783 [2024-07-15 07:33:31.883914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:40.783 [2024-07-15 07:33:31.884072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948562 ] 00:06:40.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.783 [2024-07-15 07:33:32.011121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.044 [2024-07-15 07:33:32.263758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 948562 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 948562 00:06:41.985 07:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.246 07:33:33 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 948562 00:06:42.246 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 948562 ']' 00:06:42.246 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 948562 00:06:42.246 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:42.246 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.246 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948562 00:06:42.506 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.506 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.506 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948562' 00:06:42.506 killing process with pid 948562 00:06:42.506 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 948562 00:06:42.506 07:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 948562 00:06:45.040 00:06:45.040 real 0m4.235s 00:06:45.040 user 0m4.272s 00:06:45.040 sys 0m0.728s 00:06:45.040 07:33:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.040 07:33:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.040 ************************************ 00:06:45.040 END TEST default_locks_via_rpc 00:06:45.040 ************************************ 00:06:45.040 07:33:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.040 07:33:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.040 07:33:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.040 07:33:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.040 07:33:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.040 ************************************ 00:06:45.040 START TEST non_locking_app_on_locked_coremask 00:06:45.040 ************************************ 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=949115 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 949115 /var/tmp/spdk.sock 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 949115 ']' 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.040 07:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.040 [2024-07-15 07:33:36.179425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:45.040 [2024-07-15 07:33:36.179598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949115 ] 00:06:45.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.300 [2024-07-15 07:33:36.321761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.558 [2024-07-15 07:33:36.581015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=949292 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 949292 /var/tmp/spdk2.sock 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 949292 ']' 00:06:46.492 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.493 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.493 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.493 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.493 07:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.493 [2024-07-15 07:33:37.580683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:46.493 [2024-07-15 07:33:37.580831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949292 ] 00:06:46.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.749 [2024-07-15 07:33:37.768890] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
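In non_locking_app_on_locked_coremask, pid 949115 has already claimed core 0 the normal way, and the second target (pid 949292) is now coming up with --disable-cpumask-locks, so it skips the claim and the two can share the core. Reduced to its essence, with the long workspace path abbreviated:

    # First target claims core 0; the second opts out of claiming and
    # listens on its own RPC socket, so both can run on the same core.
    ./build/bin/spdk_tgt -m 0x1 &
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # Expected: only the first pid shows a spdk_cpu_lock record in lslocks.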
00:06:46.749 [2024-07-15 07:33:37.768963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.317 [2024-07-15 07:33:38.291125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.222 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.222 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.222 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 949115 00:06:49.222 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 949115 00:06:49.222 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.481 lslocks: write error 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 949115 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 949115 ']' 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 949115 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949115 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949115' 00:06:49.481 killing process with pid 949115 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 949115 00:06:49.481 07:33:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 949115 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 949292 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 949292 ']' 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 949292 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949292 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949292' 00:06:54.827 killing 
process with pid 949292 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 949292 00:06:54.827 07:33:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 949292 00:06:57.368 00:06:57.368 real 0m12.176s 00:06:57.368 user 0m12.547s 00:06:57.368 sys 0m1.480s 00:06:57.368 07:33:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.368 07:33:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.368 ************************************ 00:06:57.368 END TEST non_locking_app_on_locked_coremask 00:06:57.368 ************************************ 00:06:57.368 07:33:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.368 07:33:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.368 07:33:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.368 07:33:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.368 07:33:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.368 ************************************ 00:06:57.368 START TEST locking_app_on_unlocked_coremask 00:06:57.368 ************************************ 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=950614 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 950614 /var/tmp/spdk.sock 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 950614 ']' 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.368 07:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.368 [2024-07-15 07:33:48.396229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:57.368 [2024-07-15 07:33:48.396397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950614 ] 00:06:57.368 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.368 [2024-07-15 07:33:48.522235] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.368 [2024-07-15 07:33:48.522297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.627 [2024-07-15 07:33:48.777115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=950762 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 950762 /var/tmp/spdk2.sock 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 950762 ']' 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.565 07:33:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.565 [2024-07-15 07:33:49.770501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
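locking_app_on_unlocked_coremask inverts the previous case: the first target (pid 950614) runs with --disable-cpumask-locks, leaving the core 0 lock free for the second, default-locking target (pid 950762) now starting, and the locks_exist check below is accordingly run against the second pid. A quick way to see who holds the claim, using the same tool the harness traces:

    # List all core-lock holders; only pid 950762 should appear here.
    lslocks | grep spdk_cpu_lock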
00:06:58.565 [2024-07-15 07:33:49.770659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950762 ] 00:06:58.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.823 [2024-07-15 07:33:49.962369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.391 [2024-07-15 07:33:50.484315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.296 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.296 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.296 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 950762 00:07:01.296 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 950762 00:07:01.296 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.862 lslocks: write error 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 950614 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 950614 ']' 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 950614 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950614 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950614' 00:07:01.862 killing process with pid 950614 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 950614 00:07:01.862 07:33:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 950614 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 950762 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 950762 ']' 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 950762 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950762 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950762' 00:07:07.138 killing process with pid 950762 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 950762 00:07:07.138 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 950762 00:07:09.675 00:07:09.675 real 0m12.309s 00:07:09.675 user 0m12.717s 00:07:09.675 sys 0m1.484s 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.675 ************************************ 00:07:09.675 END TEST locking_app_on_unlocked_coremask 00:07:09.675 ************************************ 00:07:09.675 07:34:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.675 07:34:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.675 07:34:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.675 07:34:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.675 07:34:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.675 ************************************ 00:07:09.675 START TEST locking_app_on_locked_coremask 00:07:09.675 ************************************ 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=952108 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 952108 /var/tmp/spdk.sock 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 952108 ']' 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.675 07:34:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.675 [2024-07-15 07:34:00.759432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
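locking_app_on_locked_coremask, starting here, is the strict case: the first target (pid 952108) claims core 0, and a second default-locking instance on the same mask must refuse to start. The claim primitive behaves like an exclusive advisory lock on a per-core file; the following is an illustrative analogy only (not SPDK's actual code), built on flock(1):

    # Holding /var/tmp/spdk_cpu_lock_000 blocks any second non-blocking
    # claim, mirroring the startup failure seen below.
    flock -n /var/tmp/spdk_cpu_lock_000 -c 'sleep 30' &
    sleep 1    # give the first claim time to land
    flock -n /var/tmp/spdk_cpu_lock_000 -c true || echo 'core 0 already claimed'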
00:07:09.675 [2024-07-15 07:34:00.759575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952108 ] 00:07:09.675 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.675 [2024-07-15 07:34:00.892653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.934 [2024-07-15 07:34:01.151238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=952257 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 952257 /var/tmp/spdk2.sock 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 952257 /var/tmp/spdk2.sock 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 952257 /var/tmp/spdk2.sock 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 952257 ']' 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.870 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.130 [2024-07-15 07:34:02.141802] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
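The expected-failure assertion around the second instance is the NOT wrapper visible in the traces: it runs the command and inverts the exit status, so the test only passes if startup fails. Condensed (the real helper in autotest_common.sh also tracks the error code, as the es= traces show):

    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # failure was the expected outcome
    }
    NOT waitforlisten 952257 /var/tmp/spdk2.sock

The "kill: (952257) - No such process" and "ERROR: process (pid: 952257) is no longer running" lines that follow are therefore the passing path, not a harness fault.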
00:07:11.130 [2024-07-15 07:34:02.141967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952257 ] 00:07:11.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.130 [2024-07-15 07:34:02.313947] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 952108 has claimed it. 00:07:11.130 [2024-07-15 07:34:02.314041] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (952257) - No such process 00:07:11.697 ERROR: process (pid: 952257) is no longer running 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 952108 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 952108 00:07:11.697 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.956 lslocks: write error 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 952108 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 952108 ']' 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 952108 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 952108 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 952108' 00:07:11.956 killing process with pid 952108 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 952108 00:07:11.956 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 952108 00:07:14.542 00:07:14.542 real 0m5.011s 00:07:14.542 user 0m5.256s 00:07:14.542 sys 0m0.947s 00:07:14.542 07:34:05 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.542 07:34:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.542 ************************************ 00:07:14.542 END TEST locking_app_on_locked_coremask 00:07:14.542 ************************************ 00:07:14.542 07:34:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.542 07:34:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:14.542 07:34:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.542 07:34:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.542 07:34:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.542 ************************************ 00:07:14.542 START TEST locking_overlapped_coremask 00:07:14.542 ************************************ 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=952704 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 952704 /var/tmp/spdk.sock 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 952704 ']' 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.542 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.801 [2024-07-15 07:34:05.825020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:14.801 [2024-07-15 07:34:05.825159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952704 ] 00:07:14.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.801 [2024-07-15 07:34:05.953822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.061 [2024-07-15 07:34:06.210746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.061 [2024-07-15 07:34:06.210793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.061 [2024-07-15 07:34:06.210802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=952961 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 952961 /var/tmp/spdk2.sock 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 952961 /var/tmp/spdk2.sock 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 952961 /var/tmp/spdk2.sock 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 952961 ']' 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.997 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 [2024-07-15 07:34:07.206736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
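The two masks in this test are chosen to collide on exactly one core: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is the only contested claim, and it is the core named in the failure below. The overlap is plain bit arithmetic:

    # 0x07 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))    # -> 0x4, i.e. core 2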
00:07:15.997 [2024-07-15 07:34:07.206922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952961 ] 00:07:16.256 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.256 [2024-07-15 07:34:07.397445] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 952704 has claimed it. 00:07:16.256 [2024-07-15 07:34:07.397542] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (952961) - No such process 00:07:16.827 ERROR: process (pid: 952961) is no longer running 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 952704 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 952704 ']' 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 952704 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 952704 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 952704' 00:07:16.827 killing process with pid 952704 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 952704 00:07:16.827 07:34:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 952704 00:07:19.360 00:07:19.360 real 0m4.709s 00:07:19.360 user 0m12.316s 00:07:19.360 sys 0m0.755s 00:07:19.360 07:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.360 07:34:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.360 ************************************ 00:07:19.360 END TEST locking_overlapped_coremask 00:07:19.360 ************************************ 00:07:19.360 07:34:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:19.360 07:34:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.361 07:34:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.361 07:34:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.361 07:34:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.361 ************************************ 00:07:19.361 START TEST locking_overlapped_coremask_via_rpc 00:07:19.361 ************************************ 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=953389 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 953389 /var/tmp/spdk.sock 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 953389 ']' 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.361 07:34:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.361 [2024-07-15 07:34:10.581512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.361 [2024-07-15 07:34:10.581662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953389 ] 00:07:19.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.620 [2024-07-15 07:34:10.715074] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
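Both targets in locking_overlapped_coremask_via_rpc start with --disable-cpumask-locks, deferring core claims to runtime: the first instance claims cores 0-2 through the framework_enable_cpumask_locks RPC, and the second instance's identical call is then expected to fail on the contested core 2. Assuming SPDK's stock rpc.py client (rpc_cmd in the traces is a thin wrapper around it), the equivalent manual calls would be:

    # First instance, default socket: claims its whole 0x7 mask.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second instance, own socket: expected to fail at core 2.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks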
00:07:19.620 [2024-07-15 07:34:10.715135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.879 [2024-07-15 07:34:10.979099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.879 [2024-07-15 07:34:10.979151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.879 [2024-07-15 07:34:10.979157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=953542 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 953542 /var/tmp/spdk2.sock 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 953542 ']' 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.814 07:34:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.814 [2024-07-15 07:34:11.979497] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:20.814 [2024-07-15 07:34:11.979641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953542 ] 00:07:21.072 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.072 [2024-07-15 07:34:12.152575] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.072 [2024-07-15 07:34:12.152634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.640 [2024-07-15 07:34:12.614962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.640 [2024-07-15 07:34:12.617950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.640 [2024-07-15 07:34:12.617956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.539 [2024-07-15 07:34:14.713057] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 953389 has claimed it. 
00:07:23.539 request: 00:07:23.539 { 00:07:23.539 "method": "framework_enable_cpumask_locks", 00:07:23.539 "req_id": 1 00:07:23.539 } 00:07:23.539 Got JSON-RPC error response 00:07:23.539 response: 00:07:23.539 { 00:07:23.539 "code": -32603, 00:07:23.539 "message": "Failed to claim CPU core: 2" 00:07:23.539 } 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 953389 /var/tmp/spdk.sock 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 953389 ']' 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.539 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 953542 /var/tmp/spdk2.sock 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 953542 ']' 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.796 07:34:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.053 00:07:24.053 real 0m4.744s 00:07:24.053 user 0m1.565s 00:07:24.053 sys 0m0.262s 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.053 07:34:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.053 ************************************ 00:07:24.053 END TEST locking_overlapped_coremask_via_rpc 00:07:24.053 ************************************ 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:24.053 07:34:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:24.053 07:34:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 953389 ]] 00:07:24.053 07:34:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 953389 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 953389 ']' 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 953389 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953389 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953389' 00:07:24.053 killing process with pid 953389 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 953389 00:07:24.053 07:34:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 953389 00:07:26.586 07:34:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 953542 ]] 00:07:26.586 07:34:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 953542 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 953542 ']' 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 953542 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
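The check_remaining_locks helper traced above verifies the outcome by inspecting the lock files directly: a glob collects whatever /var/tmp/spdk_cpu_lock_* files exist, a brace expansion builds the names expected for cores 0-2, and the two arrays are compared as strings. The same pattern in isolation (paths as in the test, shown here purely for illustration):

    locks=(/var/tmp/spdk_cpu_lock_*)                     # glob: whatever is on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # literal list: 000 001 002
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"

The quoted right-hand side keeps the expected list literal, which is what the backslash-escaped pattern in the trace achieves.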
00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953542 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953542' 00:07:26.586 killing process with pid 953542 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 953542 00:07:26.586 07:34:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 953542 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 953389 ]] 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 953389 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 953389 ']' 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 953389 00:07:29.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (953389) - No such process 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 953389 is not found' 00:07:29.135 Process with pid 953389 is not found 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 953542 ]] 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 953542 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 953542 ']' 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 953542 00:07:29.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (953542) - No such process 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 953542 is not found' 00:07:29.135 Process with pid 953542 is not found 00:07:29.135 07:34:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:29.135 00:07:29.135 real 0m52.404s 00:07:29.135 user 1m27.694s 00:07:29.135 sys 0m7.633s 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.135 07:34:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 ************************************ 00:07:29.135 END TEST cpu_locks 00:07:29.135 ************************************ 00:07:29.135 07:34:19 event -- common/autotest_common.sh@1142 -- # return 0 00:07:29.135 00:07:29.135 real 1m22.919s 00:07:29.135 user 2m25.819s 00:07:29.135 sys 0m12.317s 00:07:29.135 07:34:19 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.135 07:34:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 ************************************ 00:07:29.135 END TEST event 00:07:29.135 ************************************ 00:07:29.135 07:34:19 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.135 07:34:19 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:29.135 07:34:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.135 07:34:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.135 07:34:19 -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.135 ************************************ 00:07:29.135 START TEST thread 00:07:29.135 ************************************ 00:07:29.135 07:34:19 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:29.135 * Looking for test storage... 00:07:29.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:29.135 07:34:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:29.135 07:34:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.135 07:34:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.135 07:34:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 ************************************ 00:07:29.135 START TEST thread_poller_perf 00:07:29.135 ************************************ 00:07:29.135 07:34:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:29.135 [2024-07-15 07:34:19.981967] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:29.135 [2024-07-15 07:34:19.982104] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954574 ] 00:07:29.135 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.135 [2024-07-15 07:34:20.123101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.394 [2024-07-15 07:34:20.379419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.394 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:30.772 ====================================== 00:07:30.772 busy:2714723750 (cyc) 00:07:30.772 total_run_count: 282000 00:07:30.772 tsc_hz: 2700000000 (cyc) 00:07:30.772 ====================================== 00:07:30.772 poller_cost: 9626 (cyc), 3565 (nsec) 00:07:30.772 00:07:30.772 real 0m1.892s 00:07:30.772 user 0m1.700s 00:07:30.772 sys 0m0.181s 00:07:30.772 07:34:21 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.772 07:34:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.772 ************************************ 00:07:30.772 END TEST thread_poller_perf 00:07:30.772 ************************************ 00:07:30.772 07:34:21 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:30.772 07:34:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:30.772 07:34:21 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:30.772 07:34:21 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.772 07:34:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.772 ************************************ 00:07:30.772 START TEST thread_poller_perf 00:07:30.772 ************************************ 00:07:30.772 07:34:21 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:30.772 [2024-07-15 07:34:21.926519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:30.772 [2024-07-15 07:34:21.926656] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954848 ] 00:07:31.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.032 [2024-07-15 07:34:22.074647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.331 [2024-07-15 07:34:22.330423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.331 Running 1000 pollers for 1 seconds with 0 microseconds period. 
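The summary printed above is plain arithmetic over the counters: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure rescales cycles by tsc_hz. Checking the first run's numbers (bc is just one way to do the division):

    echo '2714723750 / 282000' | bc               # 9626 cyc
    echo '9626 * 1000000000 / 2700000000' | bc    # 3565 nsec

The same two divisions reproduce the second run below: 2705012023 / 3803000 gives 711 cyc, or 263 nsec at the 2.7 GHz TSC.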
00:07:32.709 ====================================== 00:07:32.709 busy:2705012023 (cyc) 00:07:32.709 total_run_count: 3803000 00:07:32.709 tsc_hz: 2700000000 (cyc) 00:07:32.709 ====================================== 00:07:32.709 poller_cost: 711 (cyc), 263 (nsec) 00:07:32.709 00:07:32.709 real 0m1.891s 00:07:32.709 user 0m1.703s 00:07:32.709 sys 0m0.177s 00:07:32.709 07:34:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.709 07:34:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.709 ************************************ 00:07:32.709 END TEST thread_poller_perf 00:07:32.709 ************************************ 00:07:32.709 07:34:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:32.709 07:34:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:32.709 00:07:32.709 real 0m3.933s 00:07:32.709 user 0m3.473s 00:07:32.709 sys 0m0.449s 00:07:32.709 07:34:23 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.709 07:34:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.709 ************************************ 00:07:32.709 END TEST thread 00:07:32.709 ************************************ 00:07:32.709 07:34:23 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.709 07:34:23 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:32.709 07:34:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.709 07:34:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.709 07:34:23 -- common/autotest_common.sh@10 -- # set +x 00:07:32.709 ************************************ 00:07:32.709 START TEST accel 00:07:32.710 ************************************ 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:32.710 * Looking for test storage... 00:07:32.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:32.710 07:34:23 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:32.710 07:34:23 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:32.710 07:34:23 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.710 07:34:23 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=955060 00:07:32.710 07:34:23 accel -- accel/accel.sh@63 -- # waitforlisten 955060 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@829 -- # '[' -z 955060 ']' 00:07:32.710 07:34:23 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.710 07:34:23 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.710 07:34:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.710 07:34:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.710 07:34:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.710 07:34:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.710 07:34:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.710 07:34:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.710 07:34:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:32.710 07:34:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:32.968 [2024-07-15 07:34:24.005157] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:32.968 [2024-07-15 07:34:24.005334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955060 ] 00:07:32.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.968 [2024-07-15 07:34:24.142578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.228 [2024-07-15 07:34:24.397426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.167 07:34:25 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.167 07:34:25 accel -- common/autotest_common.sh@862 -- # return 0 00:07:34.167 07:34:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:34.167 07:34:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:34.167 07:34:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:34.167 07:34:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:34.167 07:34:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:34.167 07:34:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:34.167 07:34:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.167 07:34:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.167 07:34:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:34.167 07:34:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.167 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.167 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.167 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # IFS== 00:07:34.168 07:34:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:34.168 07:34:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:34.168 07:34:25 accel -- accel/accel.sh@75 -- # killprocess 955060 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@948 -- # '[' -z 955060 ']' 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@952 -- # kill -0 955060 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@953 -- # uname 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 955060 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 955060' 00:07:34.168 killing process with pid 955060 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@967 -- # kill 955060 00:07:34.168 07:34:25 accel -- common/autotest_common.sh@972 -- # wait 955060 00:07:36.706 07:34:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:36.706 07:34:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 07:34:27 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:36.706 07:34:27 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:36.706 07:34:27 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.706 07:34:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.706 07:34:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.706 07:34:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.966 ************************************ 00:07:36.966 START TEST accel_missing_filename 00:07:36.966 ************************************ 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.966 07:34:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:36.966 07:34:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:36.966 [2024-07-15 07:34:27.985315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:36.966 [2024-07-15 07:34:27.985453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955617 ] 00:07:36.966 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.966 [2024-07-15 07:34:28.114472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.226 [2024-07-15 07:34:28.367218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.486 [2024-07-15 07:34:28.580753] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.056 [2024-07-15 07:34:29.139554] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:38.624 A filename is required. 
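accel_missing_filename expects accel_perf to fail, so it runs under the harness's NOT wrapper; "A filename is required." above is the intended error. In the exit-status trace that follows, the raw status 234 is folded to 106 (234 - 128) before being normalized to es=1, and accel_compress_verify later shows the same folding (161 to 33). A sketch of that inversion pattern, with the folding rule inferred from those traced values rather than taken from SPDK's actual helper:

    NOT() {
        local es=0
        "$@" || es=$?                         # run the command, capture its status
        (( es > 128 )) && es=$(( es - 128 ))  # fold signal-range statuses, per the trace
        (( es != 0 ))                         # NOT succeeds only if the command failed
    }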
00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.624 00:07:38.624 real 0m1.651s 00:07:38.624 user 0m1.438s 00:07:38.624 sys 0m0.240s 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.624 07:34:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:38.624 ************************************ 00:07:38.624 END TEST accel_missing_filename 00:07:38.624 ************************************ 00:07:38.624 07:34:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.624 07:34:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.624 07:34:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:38.624 07:34:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.625 07:34:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.625 ************************************ 00:07:38.625 START TEST accel_compress_verify 00:07:38.625 ************************************ 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.625 07:34:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.625 07:34:29 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:38.625 07:34:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:38.625 [2024-07-15 07:34:29.676719] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:38.625 [2024-07-15 07:34:29.676852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955830 ] 00:07:38.625 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.625 [2024-07-15 07:34:29.809060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.885 [2024-07-15 07:34:30.078461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.145 [2024-07-15 07:34:30.314440] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.715 [2024-07-15 07:34:30.876640] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:40.284 00:07:40.284 Compression does not support the verify option, aborting. 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.284 00:07:40.284 real 0m1.700s 00:07:40.284 user 0m1.472s 00:07:40.284 sys 0m0.258s 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.284 07:34:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:40.284 ************************************ 00:07:40.284 END TEST accel_compress_verify 00:07:40.284 ************************************ 00:07:40.284 07:34:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.284 07:34:31 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:40.284 07:34:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:40.284 07:34:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.284 07:34:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.284 ************************************ 00:07:40.284 START TEST accel_wrong_workload 00:07:40.284 ************************************ 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:40.284 07:34:31 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.284 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:40.284 07:34:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:40.284 Unsupported workload type: foobar 00:07:40.284 [2024-07-15 07:34:31.420765] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:40.284 accel_perf options: 00:07:40.284 [-h help message] 00:07:40.284 [-q queue depth per core] 00:07:40.284 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:40.284 [-T number of threads per core 00:07:40.284 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:40.284 [-t time in seconds] 00:07:40.284 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:40.284 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:40.284 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:40.284 [-l for compress/decompress workloads, name of uncompressed input file 00:07:40.284 [-S for crc32c workload, use this seed value (default 0) 00:07:40.284 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:40.285 [-f for fill workload, use this BYTE value (default 255) 00:07:40.285 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:40.285 [-y verify result if this switch is on] 00:07:40.285 [-a tasks to allocate per core (default: same value as -q)] 00:07:40.285 Can be used to spread operations across a wider range of memory. 
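The usage text above is printed because foobar is not among the listed -w workloads; the test only checks that accel_perf rejects it cleanly. For contrast, a valid software-path invocation using the flags this section exercises (the accel_crc32c test further below runs exactly these options):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y    # run 1 second, crc32c workload, seed 32, verify results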
00:07:40.285 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:40.285 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.285 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:40.285 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.285 00:07:40.285 real 0m0.057s 00:07:40.285 user 0m0.060s 00:07:40.285 sys 0m0.035s 00:07:40.285 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.285 07:34:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:40.285 ************************************ 00:07:40.285 END TEST accel_wrong_workload 00:07:40.285 ************************************ 00:07:40.285 07:34:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.285 07:34:31 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:40.285 07:34:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:40.285 07:34:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.285 07:34:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.285 ************************************ 00:07:40.285 START TEST accel_negative_buffers 00:07:40.285 ************************************ 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.285 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:40.285 07:34:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:40.545 -x option must be non-negative. 
00:07:40.545 [2024-07-15 07:34:31.518184] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:40.545 accel_perf options: 00:07:40.545 [-h help message] 00:07:40.545 [-q queue depth per core] 00:07:40.545 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:40.545 [-T number of threads per core 00:07:40.545 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:40.545 [-t time in seconds] 00:07:40.545 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:40.545 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:40.545 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:40.545 [-l for compress/decompress workloads, name of uncompressed input file 00:07:40.545 [-S for crc32c workload, use this seed value (default 0) 00:07:40.545 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:40.545 [-f for fill workload, use this BYTE value (default 255) 00:07:40.545 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:40.545 [-y verify result if this switch is on] 00:07:40.545 [-a tasks to allocate per core (default: same value as -q)] 00:07:40.545 Can be used to spread operations across a wider range of memory. 00:07:40.545 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:40.545 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.545 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:40.545 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.545 00:07:40.545 real 0m0.057s 00:07:40.545 user 0m0.057s 00:07:40.545 sys 0m0.037s 00:07:40.545 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.545 07:34:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:40.545 ************************************ 00:07:40.545 END TEST accel_negative_buffers 00:07:40.545 ************************************ 00:07:40.545 07:34:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.545 07:34:31 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:40.545 07:34:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:40.545 07:34:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.545 07:34:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.545 ************************************ 00:07:40.545 START TEST accel_crc32c 00:07:40.546 ************************************ 00:07:40.546 07:34:31 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:40.546 07:34:31 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:40.546 [2024-07-15 07:34:31.614307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:40.546 [2024-07-15 07:34:31.614429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956104 ] 00:07:40.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.546 [2024-07-15 07:34:31.742785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.804 [2024-07-15 07:34:32.003383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.064 07:34:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:43.602 07:34:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.602 00:07:43.602 real 0m2.687s 00:07:43.603 user 0m0.010s 00:07:43.603 sys 0m0.002s 00:07:43.603 07:34:34 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.603 07:34:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:43.603 ************************************ 00:07:43.603 END TEST accel_crc32c 00:07:43.603 ************************************ 00:07:43.603 07:34:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.603 07:34:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:43.603 07:34:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:43.603 07:34:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.603 07:34:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.603 ************************************ 00:07:43.603 START TEST accel_crc32c_C2 00:07:43.603 ************************************ 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.603 07:34:34 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.603 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:43.603 [2024-07-15 07:34:34.341499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:43.603 [2024-07-15 07:34:34.341629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956519 ] 00:07:43.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.603 [2024-07-15 07:34:34.470239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.603 [2024-07-15 07:34:34.734251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.862 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:43.863 07:34:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.770 00:07:45.770 real 0m2.692s 00:07:45.770 user 0m0.010s 00:07:45.770 sys 0m0.003s 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.770 07:34:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:45.770 ************************************ 00:07:45.770 END TEST accel_crc32c_C2 00:07:45.770 ************************************ 00:07:46.031 07:34:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.031 07:34:37 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:46.031 07:34:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:46.031 07:34:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.031 07:34:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.031 ************************************ 00:07:46.031 START TEST accel_copy 00:07:46.031 ************************************ 00:07:46.031 07:34:37 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
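Each suite entry above is launched the same way: run_test <name> accel_test <flags> prints the asterisk-framed START TEST / END TEST banners and times the body, which is where the real/user/sys triplets in this log come from. A sketch of a run_test-style wrapper under those assumptions (not SPDK's verbatim helper from autotest_common.sh):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"        # source of the real/user/sys triplets in this log
        local rc=$?
        echo "END TEST $name"
        return $rc
    }

    run_test accel_copy accel_test -t 1 -w copy -y   # the invocation logged just above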
00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:46.031 07:34:37 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:46.031 [2024-07-15 07:34:37.081993] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:46.031 [2024-07-15 07:34:37.082121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956804 ] 00:07:46.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.031 [2024-07-15 07:34:37.215401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.290 [2024-07-15 07:34:37.477209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.550 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.551 07:34:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.507 07:34:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.507 07:34:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.507 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.507 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.507 
07:34:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.507 07:34:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:48.508 07:34:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.508 00:07:48.508 real 0m2.694s 00:07:48.508 user 0m2.450s 00:07:48.508 sys 0m0.239s 00:07:48.508 07:34:39 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.508 07:34:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 ************************************ 00:07:48.508 END TEST accel_copy 00:07:48.508 ************************************ 00:07:48.767 07:34:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.767 07:34:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.767 07:34:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:48.767 07:34:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.767 07:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.767 ************************************ 00:07:48.767 START TEST accel_fill 00:07:48.767 ************************************ 00:07:48.767 07:34:39 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:48.767 07:34:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:48.767 [2024-07-15 07:34:39.824824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:48.767 [2024-07-15 07:34:39.824989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957130 ] 00:07:48.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.767 [2024-07-15 07:34:39.955572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.027 [2024-07-15 07:34:40.221166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
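The fill case is the first one here with extra knobs: in the trace, -f 128 surfaces as the fill byte val=0x80 (128 decimal), and the two val=64 entries mirror -q 64 and -a 64 (queue depth and, presumably, buffer alignment). The command run_test drives for it, with path and flags exactly as logged — the JSON accel config arrives on fd 62 from build_accel_config; run standalone, a plain config file would presumably have to stand in for /dev/fd/62:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y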
00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.287 07:34:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.826 07:34:42 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:51.826 07:34:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.826 00:07:51.826 real 0m2.697s 00:07:51.826 user 0m0.012s 00:07:51.826 sys 0m0.001s 00:07:51.826 07:34:42 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.826 07:34:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:51.826 ************************************ 00:07:51.826 END TEST accel_fill 00:07:51.826 ************************************ 00:07:51.826 07:34:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.826 07:34:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:51.826 07:34:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:51.826 07:34:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.826 07:34:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.826 ************************************ 00:07:51.826 START TEST accel_copy_crc32c 00:07:51.826 ************************************ 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:51.826 07:34:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:51.826 [2024-07-15 07:34:42.575169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:51.826 [2024-07-15 07:34:42.575305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957508 ] 00:07:51.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.826 [2024-07-15 07:34:42.705294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.826 [2024-07-15 07:34:42.967340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.086 
07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 07:34:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.987 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.988 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.260 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.260 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:54.260 07:34:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.260 00:07:54.260 real 0m2.696s 00:07:54.260 user 0m2.437s 00:07:54.260 sys 0m0.255s 00:07:54.260 07:34:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.260 07:34:45 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:54.260 ************************************ 00:07:54.260 END TEST accel_copy_crc32c 00:07:54.260 ************************************ 00:07:54.260 07:34:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.260 07:34:45 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:54.260 07:34:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:54.260 07:34:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.260 07:34:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.260 ************************************ 00:07:54.260 START TEST accel_copy_crc32c_C2 00:07:54.260 ************************************ 00:07:54.260 07:34:45 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.260 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:54.260 [2024-07-15 07:34:45.314892] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:54.260 [2024-07-15 07:34:45.315027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957803 ] 00:07:54.260 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.260 [2024-07-15 07:34:45.443550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.517 [2024-07-15 07:34:45.706615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
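The _C2 variants re-run a workload with -C 2 appended; for copy_crc32c that is the only difference between the two invocations below (both verbatim from this log), and it shows up a few trace entries further on as an '8192 bytes' expectation alongside the usual '4096 bytes':

    # plain run (accel_copy_crc32c, earlier above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w copy_crc32c -y
    # chained run (this test)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2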
00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.776 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.777 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.777 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.777 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.777 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.777 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.777 07:34:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
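After each timed run the script asserts on what the parse loop captured; the [[ -n software ]], [[ -n copy_crc32c ]], and [[ software == \s\o\f\t\w\a\r\e ]] lines just below are those checks with the variables already expanded by xtrace. Unexpanded, they presumably read:

    [[ -n $accel_module ]]              # a module was reported
    [[ -n $accel_opc ]]                 # an opcode was reported
    [[ $accel_module == "software" ]]   # and the software engine ran the op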
00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.309 00:07:57.309 real 0m2.699s 00:07:57.309 user 0m2.457s 00:07:57.309 sys 0m0.239s 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.309 07:34:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:57.309 ************************************ 00:07:57.309 END TEST accel_copy_crc32c_C2 00:07:57.309 ************************************ 00:07:57.309 07:34:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.309 07:34:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:57.309 07:34:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:57.309 07:34:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.309 07:34:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.309 ************************************ 00:07:57.309 START TEST accel_dualcast 00:07:57.309 ************************************ 00:07:57.309 07:34:48 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:57.309 07:34:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:57.309 [2024-07-15 07:34:48.059141] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:57.309 [2024-07-15 07:34:48.059291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958206 ] 00:07:57.309 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.309 [2024-07-15 07:34:48.191695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.309 [2024-07-15 07:34:48.453292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.570 07:34:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:59.480 07:34:50 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:59.480 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:59.740 07:34:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:59.740 07:34:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.740 07:34:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:59.740 07:34:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.740 00:07:59.740 real 0m2.700s 00:07:59.740 user 0m2.435s 00:07:59.740 sys 0m0.261s 00:07:59.740 07:34:50 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.740 07:34:50 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:59.740 ************************************ 00:07:59.740 END TEST accel_dualcast 00:07:59.740 ************************************ 00:07:59.740 07:34:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.740 07:34:50 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:59.740 07:34:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:59.740 07:34:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.740 07:34:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.740 ************************************ 00:07:59.740 START TEST accel_compare 00:07:59.740 ************************************ 00:07:59.740 07:34:50 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:59.740 07:34:50 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:59.740 [2024-07-15 07:34:50.803606] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
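The dense val=/case/IFS=: lines that dominate this trace are bash xtrace from the harness feeding workload parameters to each test as key:value records. A minimal sketch of that loop, reconstructed only from the IFS=: / read -r var val / case "$var" fragments visible in the trace — the key names and case bodies are illustrative guesses, not the actual accel.sh source:

    # Parameter-reading pattern evidenced by the xtrace in this log:
    # each input record is "key:value"; IFS=: splits it into var and val.
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # hypothetical key; the trace shows accel_opc=dualcast
            module) accel_module=$val ;;  # hypothetical key; the trace shows accel_module=software
        esac
    done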
00:07:59.740 [2024-07-15 07:34:50.803732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958500 ] 00:07:59.740 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.740 [2024-07-15 07:34:50.931925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.999 [2024-07-15 07:34:51.194996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.258 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.259 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.259 07:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.259 07:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.259 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.259 07:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 
07:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:02.796 07:34:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.796 00:08:02.796 real 0m2.692s 00:08:02.796 user 0m0.010s 00:08:02.796 sys 0m0.002s 00:08:02.796 07:34:53 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.796 07:34:53 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:02.796 ************************************ 00:08:02.796 END TEST accel_compare 00:08:02.796 ************************************ 00:08:02.796 07:34:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.796 07:34:53 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:02.796 07:34:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:02.796 07:34:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.796 07:34:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.796 ************************************ 00:08:02.796 START TEST accel_xor 00:08:02.796 ************************************ 00:08:02.796 07:34:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:02.796 07:34:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:02.796 [2024-07-15 07:34:53.541685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
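The compare pass above finished in about 2.7 s of wall time on the software module, and the harness has started the xor workload with the accel_perf invocation shown in the trace. For reference, the equivalent standalone run — flags copied verbatim from the trace, with the harness-supplied -c /dev/fd/62 JSON config omitted so the accel defaults apply (a sketch, not the job's exact setup):

    # xor workload for 1 second (-t 1) with result verification (-y),
    # using the in-tree build path visible in this log:
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w xor -y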
00:08:02.796 [2024-07-15 07:34:53.541810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958883 ] 00:08:02.796 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.796 [2024-07-15 07:34:53.672261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.796 [2024-07-15 07:34:53.932334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.055 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.056 07:34:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:04.963 07:34:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.963 00:08:04.963 real 0m2.687s 00:08:04.963 user 0m0.011s 00:08:04.963 sys 0m0.001s 00:08:04.963 07:34:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.963 07:34:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:04.963 ************************************ 00:08:04.963 END TEST accel_xor 00:08:04.963 ************************************ 00:08:05.222 07:34:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.222 07:34:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:05.222 07:34:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:05.222 07:34:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.222 07:34:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 ************************************ 00:08:05.222 START TEST accel_xor 00:08:05.222 ************************************ 00:08:05.222 07:34:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:05.222 07:34:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:05.222 [2024-07-15 07:34:56.273576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
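This second accel_xor pass adds -x 3 to the same workload, which lines up with the val=3 source-buffer count in its trace (the previous xor run showed val=2 with no -x flag). Standalone equivalent, flags taken verbatim from the run_test line:

    # xor across three source buffers instead of the default two:
    ./build/examples/accel_perf -t 1 -w xor -y -x 3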
00:08:05.222 [2024-07-15 07:34:56.273697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959196 ] 00:08:05.222 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.222 [2024-07-15 07:34:56.399384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.505 [2024-07-15 07:34:56.659521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.781 07:34:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.688 07:34:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.688 07:34:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.688 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.688 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.688 07:34:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:07.689 07:34:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.689 00:08:07.689 real 0m2.682s 00:08:07.689 user 0m2.446s 00:08:07.689 sys 0m0.232s 00:08:07.689 07:34:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.689 07:34:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 ************************************ 00:08:07.689 END TEST accel_xor 00:08:07.689 ************************************ 00:08:07.947 07:34:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.947 07:34:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:07.947 07:34:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:07.947 07:34:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.947 07:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 ************************************ 00:08:07.947 START TEST accel_dif_verify 00:08:07.947 ************************************ 00:08:07.947 07:34:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:07.947 07:34:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:07.947 [2024-07-15 07:34:59.002748] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
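The dif_verify workload carries extra parameters in its trace: alongside the '4096 bytes' values there are '512 bytes' and '8 bytes' entries, consistent with 512-byte blocks each carrying an 8-byte T10 DIF tuple. Note the run_test line passes no -y flag, presumably because verification is the operation itself. Standalone equivalent of the run being traced:

    # DIF verify workload, 1 second, software module:
    ./build/examples/accel_perf -t 1 -w dif_verify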
00:08:07.947 [2024-07-15 07:34:59.002906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959494 ] 00:08:07.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.947 [2024-07-15 07:34:59.149548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.207 [2024-07-15 07:34:59.410779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.466 07:34:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:11.002 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:11.003 07:35:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:11.003 07:35:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.003 07:35:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:11.003 07:35:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.003 00:08:11.003 real 0m2.708s 00:08:11.003 user 0m2.453s 00:08:11.003 sys 0m0.251s 00:08:11.003 07:35:01 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.003 07:35:01 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:11.003 ************************************ 00:08:11.003 END TEST accel_dif_verify 00:08:11.003 ************************************ 00:08:11.003 07:35:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.003 07:35:01 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:11.003 07:35:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:11.003 07:35:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.003 07:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.003 ************************************ 00:08:11.003 START TEST accel_dif_generate 00:08:11.003 ************************************ 00:08:11.003 07:35:01 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.003 
07:35:01 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:11.003 07:35:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:11.003 [2024-07-15 07:35:01.751421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:11.003 [2024-07-15 07:35:01.751541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959901 ] 00:08:11.003 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.003 [2024-07-15 07:35:01.885800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.003 [2024-07-15 07:35:02.147725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:11.262 07:35:02 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.262 07:35:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.169 07:35:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.429 07:35:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.429 07:35:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:13.429 07:35:04 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:13.429
00:08:13.429 real 0m2.695s
00:08:13.429 user 0m2.456s
00:08:13.429 sys 0m0.236s
00:08:13.429 07:35:04 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:13.429 07:35:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:08:13.429 ************************************
00:08:13.429 END TEST accel_dif_generate
00:08:13.429 ************************************
00:08:13.429 07:35:04 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:13.429 07:35:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:13.429 07:35:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:13.429 07:35:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:13.429 07:35:04 accel -- common/autotest_common.sh@10 -- # set +x
00:08:13.429 ************************************
00:08:13.429 START TEST accel_dif_generate_copy
00:08:13.429 ************************************
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:08:13.429 07:35:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:08:13.429 [2024-07-15 07:35:04.490694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
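The accel/accel.sh@27 entries traced at the end of each test above are a small post-run assertion block. A minimal sketch of what they test, using the pre-expansion variable names from the local declarations in the trace (the enclosing function body is assumed):

    # Post-run checks as traced at accel/accel.sh@27; $accel_module and
    # $accel_opc are filled in by the "IFS=: read -r var val" loop below.
    [[ -n $accel_module ]]            # accel_perf reported which module ran
    [[ -n $accel_opc ]]               # the workload opcode was parsed back
    [[ $accel_module == software ]]   # this run expects the software engine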
00:08:13.429 [2024-07-15 07:35:04.490813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960197 ] 00:08:13.429 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.429 [2024-07-15 07:35:04.620731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.688 [2024-07-15 07:35:04.884265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
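The wall of case "$var" / IFS=: / read -r var val entries running through each test is the xtrace of a single parsing loop in accel.sh: accel_perf prints its configuration as "key: value" lines at startup (hence the quoted values such as '4096 bytes', '512 bytes' and '1 seconds' in the trace), and the harness splits each line on the colon. A hedged reconstruction of that loop, with the case patterns and input plumbing assumed:

    # Reconstruction of the loop traced at accel/accel.sh@19-23; the real
    # case labels and the source of the input are assumptions.
    while IFS=: read -r var val; do
        case "$var" in
            *"workload"*) accel_opc=${val# } ;;    # e.g. dif_generate_copy
            *"module"*) accel_module=${val# } ;;   # e.g. software
        esac
    done < <("$accel_perf" "$@")   # hypothetical; fed by the @12 command above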
00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.948 07:35:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.483 00:08:16.483 real 0m2.690s 00:08:16.483 user 0m0.010s 00:08:16.483 sys 0m0.003s 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.483 07:35:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.483 ************************************ 00:08:16.483 END TEST accel_dif_generate_copy 00:08:16.483 ************************************ 00:08:16.483 07:35:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:16.483 07:35:07 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:16.483 07:35:07 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.483 07:35:07 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:16.483 07:35:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.483 07:35:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.483 ************************************ 00:08:16.484 START TEST accel_comp 00:08:16.484 ************************************ 00:08:16.484 07:35:07 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.484 07:35:07 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:16.484 07:35:07 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:16.484 [2024-07-15 07:35:07.227223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:16.484 [2024-07-15 07:35:07.227349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960596 ] 00:08:16.484 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.484 [2024-07-15 07:35:07.357515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.484 [2024-07-15 07:35:07.617569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.744 07:35:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.650 07:35:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.911 07:35:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.911 07:35:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:18.911 07:35:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.911 00:08:18.911 real 0m2.698s 00:08:18.911 user 0m2.452s 00:08:18.911 sys 0m0.243s 00:08:18.911 07:35:09 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.911 07:35:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:18.911 ************************************ 00:08:18.911 END TEST accel_comp 00:08:18.911 ************************************ 00:08:18.911 07:35:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.911 07:35:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:18.911 07:35:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:18.911 07:35:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.911 07:35:09 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.911 ************************************ 00:08:18.911 START TEST accel_decomp 00:08:18.911 ************************************ 00:08:18.911 07:35:09 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:18.911 07:35:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:18.911 [2024-07-15 07:35:09.961658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
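Every compress/decompress case in this stretch is the same accel_perf invocation with a different -w workload, as the accel/accel.sh@12 entries show. Stripped of the workspace prefix it has this shape (the reading of the flags is inferred from the surrounding trace, not stated by the log):

    # Shape of the traced command; paths shortened for readability.
    # -c /dev/fd/62 feeds in the JSON assembled by build_accel_config,
    # -l names the input payload, and -y (decompress cases only) presumably
    # enables verification of the output.
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l test/accel/bib -y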
00:08:18.911 [2024-07-15 07:35:09.961788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960895 ] 00:08:18.911 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.911 [2024-07-15 07:35:10.094629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.171 [2024-07-15 07:35:10.353375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.431 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:19.432 07:35:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:21.969 07:35:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.969 00:08:21.969 real 0m2.687s 00:08:21.969 user 0m2.451s 00:08:21.969 sys 0m0.232s 00:08:21.969 07:35:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.970 07:35:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 ************************************ 00:08:21.970 END TEST accel_decomp 00:08:21.970 ************************************ 00:08:21.970 07:35:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.970 07:35:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.970 07:35:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:21.970 07:35:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.970 07:35:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 ************************************ 00:08:21.970 START TEST accel_decomp_full 00:08:21.970 ************************************ 00:08:21.970 07:35:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:21.970 07:35:12 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:21.970 07:35:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:21.970 [2024-07-15 07:35:12.703479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:21.970 [2024-07-15 07:35:12.703593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961188 ] 00:08:21.970 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.970 [2024-07-15 07:35:12.840376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.970 [2024-07-15 07:35:13.102556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.229 07:35:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:24.154 07:35:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:24.412 07:35:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.412 07:35:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:24.412 07:35:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.412 00:08:24.412 real 0m2.727s 00:08:24.412 user 0m0.011s 00:08:24.412 sys 0m0.003s 00:08:24.412 07:35:15 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.412 07:35:15 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:24.412 ************************************ 00:08:24.412 END TEST accel_decomp_full 00:08:24.412 ************************************ 00:08:24.412 07:35:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:24.412 07:35:15 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:24.412 07:35:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:24.412 07:35:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.412 07:35:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.412 ************************************ 00:08:24.412 START TEST accel_decomp_mcore 00:08:24.412 ************************************ 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:24.412 07:35:15 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:24.412 [2024-07-15 07:35:15.472366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:24.412 [2024-07-15 07:35:15.472490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961589 ] 00:08:24.412 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.412 [2024-07-15 07:35:15.603789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.672 [2024-07-15 07:35:15.871701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.672 [2024-07-15 07:35:15.871754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.672 [2024-07-15 07:35:15.871796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.672 [2024-07-15 07:35:15.871809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.930 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.930 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.930 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.930 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.930 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.930 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.931 07:35:16 
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:24.931 07:35:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:27.469 07:35:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:27.469 07:35:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:27.469 07:35:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:27.469
00:08:27.469 real    0m2.726s
00:08:27.469 user    0m0.014s
00:08:27.469 sys     0m0.001s
00:08:27.469 ************************************
00:08:27.469 END TEST accel_decomp_mcore
00:08:27.469 ************************************
00:08:27.469 07:35:18 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:27.469 07:35:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:27.469 ************************************
00:08:27.469 START TEST accel_decomp_full_mcore
00:08:27.469 ************************************
00:08:27.469 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:27.469 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:08:27.469 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:08:27.469 [2024-07-15 07:35:18.243670] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
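The run_test line above launches accel_perf with its JSON config piped in over /dev/fd/62. A standalone reproduction might look like the sketch below; the binary and input paths are taken verbatim from this job's workspace, the flag meanings are inferred from the surrounding traces, and the empty JSON object stands in for whatever build_accel_config assembled (nothing, in this run, since no hardware accel modules were enabled).

    #!/usr/bin/env bash
    # Sketch: re-run the decompress case by hand (paths from this job's workspace).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    args=(
      -t 1                       # run time: matches val='1 seconds' in the trace
      -w decompress              # workload under test
      -l "$SPDK/test/accel/bib"  # compressed input file
      -y                         # verify the decompressed output
      -o 0                       # transfer size 0: whole file (val='111250 bytes')
      -m 0xf                     # core mask: four reactors, per the EAL notices
    )

    # build_accel_config assembled no module config in this run, so an empty
    # JSON object over an anonymous fd plays the role of /dev/fd/62.
    "$SPDK/build/examples/accel_perf" -c <(echo '{}') "${args[@]}"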
00:08:27.469 [2024-07-15 07:35:18.243793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961888 ]
00:08:27.469 EAL: No free 2048 kB hugepages reported on node 1
00:08:27.469 [2024-07-15 07:35:18.374382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:27.469 [2024-07-15 07:35:18.640100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:27.469 [2024-07-15 07:35:18.640153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:27.469 [2024-07-15 07:35:18.640197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.469 [2024-07-15 07:35:18.640206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:27.729 07:35:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:30.268 07:35:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:30.268 07:35:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:30.268 07:35:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:30.268
00:08:30.268 real    0m2.766s
00:08:30.268 user    0m0.012s
00:08:30.268 sys     0m0.003s
00:08:30.268 ************************************
00:08:30.268 END TEST accel_decomp_full_mcore
00:08:30.268 ************************************
00:08:30.268 07:35:20 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:30.268 07:35:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:30.268 ************************************
00:08:30.268 START TEST accel_decomp_mthread
00:08:30.268 ************************************
00:08:30.268 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:30.268 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:08:30.268 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:08:30.268 [2024-07-15 07:35:21.063219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
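The "Total cores available: 4" notice is a direct consequence of the -c 0xf mask handed to DPDK. When picking masks for these mcore tests, a popcount of the mask predicts the reactor count; the helper below is illustrative only and is not part of the harness.

    # Illustrative helper (not in the harness): predict reactor count from a core mask.
    count_cores() {
      local mask=$(( $1 )) n=0
      while (( mask )); do
        (( n += mask & 1, mask >>= 1 ))
      done
      echo "$n"
    }

    count_cores 0xf   # -> 4, matching "Total cores available: 4" above
    count_cores 0x7   # -> 3, as in the DIF functional test later in this log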
00:08:30.268 [2024-07-15 07:35:21.063335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962301 ]
00:08:30.268 EAL: No free 2048 kB hugepages reported on node 1
00:08:30.268 [2024-07-15 07:35:21.191613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.268 [2024-07-15 07:35:21.451889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:08:30.529 07:35:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:08:33.066 07:35:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:33.066 07:35:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:33.066 07:35:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:33.066
00:08:33.066 real    0m2.704s
00:08:33.066 user    0m2.459s
00:08:33.066 sys     0m0.243s
00:08:33.066 ************************************
00:08:33.066 END TEST accel_decomp_mthread
00:08:33.066 ************************************
00:08:33.066 07:35:23 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:33.066 07:35:23 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
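The only switch distinguishing the mthread run that just finished from the earlier single-threaded runs is -T 2, which by its use here requests two worker threads on the single core in the 0x1 mask; its exact semantics are inferred from this log, not from accel_perf's help text. A quick A/B comparison, reusing only flags already seen above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    perf="$SPDK/build/examples/accel_perf"

    echo "== decompress, default threading =="
    "$perf" -c <(echo '{}') -t 1 -w decompress -l "$SPDK/test/accel/bib" -y

    echo "== decompress, -T 2 (as in the run above) =="
    "$perf" -c <(echo '{}') -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2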
00:08:33.066 ************************************
00:08:33.066 START TEST accel_decomp_full_mthread
00:08:33.066 ************************************
00:08:33.066 07:35:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:33.066 07:35:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:08:33.066 07:35:23 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:08:33.066 [2024-07-15 07:35:23.818129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:33.066 [2024-07-15 07:35:23.818283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962598 ]
00:08:33.066 EAL: No free 2048 kB hugepages reported on node 1
00:08:33.066 [2024-07-15 07:35:23.950569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:33.066 [2024-07-15 07:35:24.211857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:08:33.325 07:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:08:35.891 07:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:35.891 07:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:35.891 07:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:35.891
00:08:35.891 real    0m2.746s
00:08:35.891 user    0m2.500s
00:08:35.891 sys     0m0.243s
00:08:35.891 ************************************
00:08:35.891 END TEST accel_decomp_full_mthread
00:08:35.891 ************************************
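The "full" variants differ from their plain counterparts only in -o 0: the traces show the per-operation transfer size jumping from the 4096-byte default to the whole 111250-byte input. That figure is simply the size of the test file, which is easy to confirm:

    # The 111250-byte transfer in the trace is just the size of the test input:
    stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib   # expect 111250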
00:08:35.891 07:35:26 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:35.891 07:35:26 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:08:35.891 07:35:26 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:08:35.891 ************************************
00:08:35.891 START TEST accel_dif_functional_tests
00:08:35.891 ************************************
00:08:35.891 [2024-07-15 07:35:26.642760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:35.891 [2024-07-15 07:35:26.642920] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962999 ]
00:08:35.891 EAL: No free 2048 kB hugepages reported on node 1
00:08:35.891 [2024-07-15 07:35:26.773734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:36.149 [2024-07-15 07:35:27.040758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:36.149 [2024-07-15 07:35:27.040803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.149 [2024-07-15 07:35:27.040813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:36.149
00:08:36.149 CUnit - A unit testing framework for C - Version 2.1-3
00:08:36.149 http://cunit.sourceforge.net/
00:08:36.149
00:08:36.149 Suite: accel_dif
00:08:36.150   Test: verify: DIF generated, GUARD check ...passed
00:08:36.150   Test: verify: DIF generated, APPTAG check ...passed
00:08:36.150   Test: verify: DIF generated, REFTAG check ...passed
00:08:36.150   Test: verify: DIF not generated, GUARD check ...
00:08:36.150   [2024-07-15 07:35:27.375623] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:08:36.150   passed
00:08:36.150   Test: verify: DIF not generated, APPTAG check ...
00:08:36.150   [2024-07-15 07:35:27.375717] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:08:36.150   passed
00:08:36.150   Test: verify: DIF not generated, REFTAG check ...
00:08:36.150   [2024-07-15 07:35:27.375793] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:08:36.150   passed
00:08:36.150   Test: verify: APPTAG correct, APPTAG check ...passed
00:08:36.150   Test: verify: APPTAG incorrect, APPTAG check ...
00:08:36.150   [2024-07-15 07:35:27.375931] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:08:36.150   passed
00:08:36.150   Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:08:36.150   Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:08:36.150   Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:08:36.150   Test: verify: REFTAG_INIT incorrect, REFTAG check ...
00:08:36.150   [2024-07-15 07:35:27.376180] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:08:36.150   passed
00:08:36.150   Test: verify copy: DIF generated, GUARD check ...passed
00:08:36.150   Test: verify copy: DIF generated, APPTAG check ...passed
00:08:36.150   Test: verify copy: DIF generated, REFTAG check ...passed
00:08:36.150   Test: verify copy: DIF not generated, GUARD check ...
00:08:36.150   [2024-07-15 07:35:27.376486] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:08:36.150   passed
00:08:36.150   Test: verify copy: DIF not generated, APPTAG check ...
00:08:36.150   [2024-07-15 07:35:27.376563] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:08:36.150   passed
00:08:36.150   Test: verify copy: DIF not generated, REFTAG check ...
00:08:36.150   [2024-07-15 07:35:27.376638] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:08:36.150   passed
00:08:36.150   Test: generate copy: DIF generated, GUARD check ...passed
00:08:36.150   Test: generate copy: DIF generated, APTTAG check ...passed
00:08:36.150   Test: generate copy: DIF generated, REFTAG check ...passed
00:08:36.150   Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:08:36.150   Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:08:36.150   Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:08:36.150   Test: generate copy: iovecs-len validate ...
00:08:36.150   [2024-07-15 07:35:27.377124] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:08:36.150   passed
00:08:36.150   Test: generate copy: buffer alignment validate ...passed
00:08:36.150
00:08:36.150 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:36.150               suites      1      1    n/a      0        0
00:08:36.150                tests     26     26     26      0        0
00:08:36.150              asserts    115    115    115      0      n/a
00:08:36.150
00:08:36.150 Elapsed time =    0.005 seconds
00:08:37.527
00:08:37.527 real    0m2.019s
00:08:37.527 user    0m3.857s
00:08:37.527 sys     0m0.301s
00:08:37.527 ************************************
00:08:37.527 END TEST accel_dif_functional_tests
00:08:37.527 ************************************
00:08:37.527 07:35:28 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:37.527
00:08:37.527 real    1m4.759s
00:08:37.527 user    1m11.488s
00:08:37.527 sys     0m7.118s
00:08:37.527 ************************************
00:08:37.527 END TEST accel
00:08:37.527 ************************************
00:08:37.527 07:35:28 -- common/autotest_common.sh@1142 -- # return 0
00:08:37.527 07:35:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:37.527 ************************************
00:08:37.527 START TEST accel_rpc
00:08:37.527 ************************************
00:08:37.527 * Looking for test storage...
00:08:37.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
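Every START/END banner and real/user/sys triple in this log comes from the run_test wrapper in common/autotest_common.sh. The sketch below is a minimal reconstruction inferred from the output alone; the real helper also manages xtrace state and failure reporting.

    # Minimal reconstruction of run_test, inferred from the banners and timing
    # lines in this log (the real one lives in common/autotest_common.sh).
    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # emits the real/user/sys lines seen throughout
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }

    run_test_sketch demo_sleep sleep 1   # hypothetical test name, trivial payload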
00:08:37.527 07:35:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:37.527 07:35:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:08:37.527 07:35:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=963274
00:08:37.527 07:35:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 963274
00:08:37.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:37.786 [2024-07-15 07:35:28.791960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:37.786 [2024-07-15 07:35:28.792109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid963274 ]
00:08:37.786 EAL: No free 2048 kB hugepages reported on node 1
00:08:37.786 [2024-07-15 07:35:28.919173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.046 [2024-07-15 07:35:29.177383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.613 07:35:29 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:08:38.613 07:35:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:08:38.613 ************************************
00:08:38.613 START TEST accel_assign_opcode
00:08:38.613 ************************************
00:08:38.613 07:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:08:38.613 [2024-07-15 07:35:29.719678] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:08:38.613 07:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:08:38.613 [2024-07-15 07:35:29.727659] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:08:38.613 07:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:08:39.549 07:35:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:08:39.549 07:35:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:08:39.549 07:35:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:08:39.549 software
00:08:39.549
00:08:39.549 real    0m0.936s
00:08:39.549 user    0m0.038s
00:08:39.549 sys     0m0.005s
00:08:39.549 ************************************
00:08:39.549 END TEST accel_assign_opcode
00:08:39.549 ************************************
00:08:39.549 07:35:30 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:08:39.549 07:35:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 963274
00:08:39.549 killing process with pid 963274
00:08:39.549 07:35:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 963274
00:08:42.118 07:35:33 accel_rpc -- common/autotest_common.sh@972 -- # wait 963274
00:08:42.118
00:08:42.118 real    0m4.583s
00:08:42.118 user    0m4.526s
00:08:42.118 sys     0m0.633s
00:08:42.118 ************************************
00:08:42.118 END TEST accel_rpc
00:08:42.118 ************************************
00:08:42.118 07:35:33 -- common/autotest_common.sh@1142 -- # return 0
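The assign-opcode suite above drives its RPCs against a target that was started with --wait-for-rpc and is therefore still pre-init. Done by hand it reduces to the sketch below (rpc.py path from this workspace); note the bogus module name is accepted at assignment time and only resolved when framework_start_init brings up the accel layer, which is why the final query reports software.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Target must still be pre-init: build/bin/spdk_tgt --wait-for-rpc
    "$RPC" accel_assign_opc -o copy -m incorrect   # accepted, no such module yet
    "$RPC" accel_assign_opc -o copy -m software    # reassign to the software module
    "$RPC" framework_start_init                    # accel layer initializes here

    # The copy opcode should now resolve to the software module:
    "$RPC" accel_get_opc_assignments | jq -r .copy   # -> software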
00:08:42.118 07:35:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:08:42.118 ************************************
00:08:42.118 START TEST app_cmdline
00:08:42.118 ************************************
00:08:42.379 * Looking for test storage...
00:08:42.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:08:42.379 07:35:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:08:42.379 07:35:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:08:42.379 07:35:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=963930
00:08:42.379 07:35:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 963930
00:08:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:42.379 [2024-07-15 07:35:33.431184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:42.379 [2024-07-15 07:35:33.431340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid963930 ]
00:08:42.379 EAL: No free 2048 kB hugepages reported on node 1
00:08:42.379 [2024-07-15 07:35:33.554163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.638 [2024-07-15 07:35:33.782739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.572 07:35:34 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:08:43.572 07:35:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:08:43.829 {
00:08:43.829   "version": "SPDK v24.09-pre git sha1 719d03c6a",
00:08:43.829   "fields": {
00:08:43.829     "major": 24,
00:08:43.829     "minor": 9,
00:08:43.829     "patch": 0,
00:08:43.829     "suffix": "-pre",
00:08:43.829     "commit": "719d03c6a"
00:08:43.829   }
00:08:43.829 }
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:08:43.829 07:35:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:44.089 request:
00:08:44.089 {
00:08:44.089   "method": "env_dpdk_get_mem_stats",
00:08:44.089   "req_id": 1
00:08:44.089 }
00:08:44.089 Got JSON-RPC error response
00:08:44.089 response:
00:08:44.089 {
00:08:44.089   "code": -32601,
00:08:44.089   "message": "Method not found"
00:08:44.089 }
00:08:44.089 07:35:35 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:08:44.089 07:35:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 963930
00:08:44.089 killing process with pid 963930
00:08:44.089 07:35:35 app_cmdline -- common/autotest_common.sh@967 -- # kill 963930
00:08:46.627 07:35:35 app_cmdline -- common/autotest_common.sh@972 -- # wait 963930
00:08:46.627
00:08:46.627 real    0m4.510s
00:08:46.627 user    0m4.874s
00:08:46.627 sys     0m0.662s
07:35:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:46.627 ************************************ 00:08:46.627 END TEST app_cmdline 00:08:46.627 ************************************ 00:08:46.627 07:35:37 -- common/autotest_common.sh@1142 -- # return 0 00:08:46.627 07:35:37 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:46.627 07:35:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:46.627 07:35:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.627 07:35:37 -- common/autotest_common.sh@10 -- # set +x 00:08:46.627 ************************************ 00:08:46.627 START TEST version 00:08:46.627 ************************************ 00:08:46.627 07:35:37 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:46.886 * Looking for test storage... 00:08:46.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:46.886 07:35:37 version -- app/version.sh@17 -- # get_header_version major 00:08:46.886 07:35:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # cut -f2 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.886 07:35:37 version -- app/version.sh@17 -- # major=24 00:08:46.886 07:35:37 version -- app/version.sh@18 -- # get_header_version minor 00:08:46.886 07:35:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # cut -f2 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.886 07:35:37 version -- app/version.sh@18 -- # minor=9 00:08:46.886 07:35:37 version -- app/version.sh@19 -- # get_header_version patch 00:08:46.886 07:35:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # cut -f2 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.886 07:35:37 version -- app/version.sh@19 -- # patch=0 00:08:46.886 07:35:37 version -- app/version.sh@20 -- # get_header_version suffix 00:08:46.886 07:35:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # cut -f2 00:08:46.886 07:35:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.886 07:35:37 version -- app/version.sh@20 -- # suffix=-pre 00:08:46.886 07:35:37 version -- app/version.sh@22 -- # version=24.9 00:08:46.886 07:35:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:46.886 07:35:37 version -- app/version.sh@28 -- # version=24.9rc0 00:08:46.886 07:35:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:46.886 07:35:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
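The get_header_version calls above reduce to one pipeline per field: grep the matching #define out of include/spdk/version.h, keep the second tab-separated column, strip the quotes. A standalone sketch of the same logic, with the suffix handling condensed to the one case this run exercises (a -pre suffix is reported as rc0):

    get_header_version() {
        # $1 is MAJOR, MINOR, PATCH or SUFFIX
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    version=$(get_header_version MAJOR).$(get_header_version MINOR)    # 24.9
    (( $(get_header_version PATCH) != 0 )) && version+=.$(get_header_version PATCH)
    [[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0         # 24.9rc0
    echo "$version"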
00:08:46.886 07:35:37 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:46.886 07:35:37 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:46.886 00:08:46.886 real 0m0.104s 00:08:46.886 user 0m0.062s 00:08:46.886 sys 0m0.064s 00:08:46.886 07:35:37 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.886 07:35:37 version -- common/autotest_common.sh@10 -- # set +x 00:08:46.886 ************************************ 00:08:46.886 END TEST version 00:08:46.886 ************************************ 00:08:46.886 07:35:37 -- common/autotest_common.sh@1142 -- # return 0 00:08:46.886 07:35:37 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:46.886 07:35:37 -- spdk/autotest.sh@198 -- # uname -s 00:08:46.886 07:35:37 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:46.886 07:35:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:46.886 07:35:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:46.886 07:35:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:46.886 07:35:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:46.886 07:35:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:46.886 07:35:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.886 07:35:37 -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 07:35:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:46.887 07:35:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:46.887 07:35:37 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:46.887 07:35:37 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:46.887 07:35:37 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:46.887 07:35:37 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:46.887 07:35:37 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:46.887 07:35:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:46.887 07:35:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.887 07:35:37 -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 ************************************ 00:08:46.887 START TEST nvmf_tcp 00:08:46.887 ************************************ 00:08:46.887 07:35:38 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:46.887 * Looking for test storage... 00:08:46.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.887 07:35:38 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.887 07:35:38 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.887 07:35:38 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.887 07:35:38 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.887 07:35:38 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.887 07:35:38 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.887 07:35:38 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:46.887 07:35:38 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:46.887 07:35:38 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.887 07:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:46.887 07:35:38 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:46.887 07:35:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:46.887 07:35:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.887 07:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.146 ************************************ 00:08:47.146 START TEST nvmf_example 00:08:47.146 ************************************ 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:47.146 * Looking for test storage... 
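Each nvmf test re-sources test/nvmf/common.sh, as the next trace lines show, so every suite starts from the same fixed ports and a host identity freshly generated with nvme-cli. The identity step reduces to roughly the following sketch (the exact extraction of NVME_HOSTID is an assumption; the trace only shows that it equals the UUID portion of the NQN):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    # nvme gen-hostnqn prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: strip everything up to "uuid:"
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")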
00:08:47.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.146 07:35:38 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.147 07:35:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.054 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.055 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:08:49.055 00:08:49.055 --- 10.0.0.2 ping statistics --- 00:08:49.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.055 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:08:49.055 00:08:49.055 --- 10.0.0.1 ping statistics --- 00:08:49.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.055 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.055 07:35:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=966229 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 966229 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 966229 ']' 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
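The nvmf_tcp_init sequence traced above is what lets one dual-port NIC act as both initiator and target on a single box: one port moves into a private network namespace, each side gets an address on 10.0.0.0/24, and a firewall rule plus two pings prove the path before the target starts. Reassembled from the trace (interface names are the ones discovered on this host):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target side enters the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # namespace -> host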
00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.315 07:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.252 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:50.510 07:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:50.510 EAL: No free 2048 kB hugepages reported on node 1 
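The bring-up traced above is done entirely over JSON-RPC against the example target's default socket: one TCP transport, one 64 MiB malloc bdev with 512-byte blocks, one subsystem carrying that namespace plus a TCP listener, after which spdk_nvme_perf is pointed at it from the initiator side. The same steps as plain rpc.py calls, run from the SPDK repository root:

    rpc=scripts/rpc.py    # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                 # creates "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'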
00:09:02.702 Initializing NVMe Controllers 00:09:02.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:02.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:02.702 Initialization complete. Launching workers. 00:09:02.702 ======================================================== 00:09:02.702 Latency(us) 00:09:02.702 Device Information : IOPS MiB/s Average min max 00:09:02.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11959.90 46.72 5352.43 1230.15 15766.62 00:09:02.702 ======================================================== 00:09:02.702 Total : 11959.90 46.72 5352.43 1230.15 15766.62 00:09:02.702 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.702 rmmod nvme_tcp 00:09:02.702 rmmod nvme_fabrics 00:09:02.702 rmmod nvme_keyring 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 966229 ']' 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 966229 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 966229 ']' 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 966229 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 966229 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 966229' 00:09:02.702 killing process with pid 966229 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 966229 00:09:02.702 07:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 966229 00:09:02.702 nvmf threads initialize successfully 00:09:02.702 bdev subsystem init successfully 00:09:02.702 created a nvmf target service 00:09:02.702 create targets's poll groups done 00:09:02.702 all subsystems of target started 00:09:02.702 nvmf target is running 00:09:02.702 all subsystems of target stopped 00:09:02.702 destroy targets's poll groups done 00:09:02.702 destroyed the nvmf target service 00:09:02.702 bdev subsystem finish successfully 00:09:02.702 nvmf threads destroy successfully 00:09:02.702 07:35:53 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.702 07:35:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.082 07:35:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.082 07:35:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:04.082 07:35:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.082 07:35:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:04.082 00:09:04.082 real 0m17.017s 00:09:04.082 user 0m48.001s 00:09:04.082 sys 0m3.265s 00:09:04.082 07:35:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.082 07:35:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:04.082 ************************************ 00:09:04.082 END TEST nvmf_example 00:09:04.082 ************************************ 00:09:04.082 07:35:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.082 07:35:55 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:04.082 07:35:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.082 07:35:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.082 07:35:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.082 ************************************ 00:09:04.082 START TEST nvmf_filesystem 00:09:04.082 ************************************ 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:04.082 * Looking for test storage... 
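The nvmftestfini teardown just traced mirrors the bring-up in reverse: unload the kernel initiator modules (which also drops nvme_fabrics and nvme_keyring, hence the three rmmod lines), kill the target by pid, remove the namespace, and flush the initiator address. Condensed into a sketch (the body of _remove_spdk_ns is redirected away in the trace, so the netns delete shown here is an assumption):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk   # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1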
00:09:04.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:04.082 07:35:55 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:04.082 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:04.083 #define SPDK_CONFIG_H 00:09:04.083 #define SPDK_CONFIG_APPS 1 00:09:04.083 #define SPDK_CONFIG_ARCH native 00:09:04.083 #define SPDK_CONFIG_ASAN 1 00:09:04.083 #undef SPDK_CONFIG_AVAHI 00:09:04.083 #undef SPDK_CONFIG_CET 00:09:04.083 #define SPDK_CONFIG_COVERAGE 1 00:09:04.083 #define SPDK_CONFIG_CROSS_PREFIX 00:09:04.083 #undef SPDK_CONFIG_CRYPTO 00:09:04.083 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:04.083 #undef SPDK_CONFIG_CUSTOMOCF 00:09:04.083 #undef SPDK_CONFIG_DAOS 00:09:04.083 #define SPDK_CONFIG_DAOS_DIR 00:09:04.083 #define SPDK_CONFIG_DEBUG 1 00:09:04.083 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:04.083 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:04.083 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:04.083 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:04.083 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:04.083 #undef SPDK_CONFIG_DPDK_UADK 00:09:04.083 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:04.083 #define SPDK_CONFIG_EXAMPLES 1 00:09:04.083 #undef SPDK_CONFIG_FC 00:09:04.083 #define SPDK_CONFIG_FC_PATH 00:09:04.083 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:04.083 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:04.083 #undef SPDK_CONFIG_FUSE 00:09:04.083 #undef SPDK_CONFIG_FUZZER 00:09:04.083 #define SPDK_CONFIG_FUZZER_LIB 00:09:04.083 #undef SPDK_CONFIG_GOLANG 00:09:04.083 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:04.083 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:04.083 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:04.083 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:04.083 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:04.083 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:04.083 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:04.083 #define SPDK_CONFIG_IDXD 1 00:09:04.083 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:04.083 #undef SPDK_CONFIG_IPSEC_MB 00:09:04.083 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:04.083 #define SPDK_CONFIG_ISAL 1 00:09:04.083 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:04.083 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:04.083 #define SPDK_CONFIG_LIBDIR 00:09:04.083 #undef SPDK_CONFIG_LTO 00:09:04.083 #define SPDK_CONFIG_MAX_LCORES 128 00:09:04.083 #define SPDK_CONFIG_NVME_CUSE 1 00:09:04.083 #undef SPDK_CONFIG_OCF 00:09:04.083 #define SPDK_CONFIG_OCF_PATH 00:09:04.083 #define 
SPDK_CONFIG_OPENSSL_PATH 00:09:04.083 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:04.083 #define SPDK_CONFIG_PGO_DIR 00:09:04.083 #undef SPDK_CONFIG_PGO_USE 00:09:04.083 #define SPDK_CONFIG_PREFIX /usr/local 00:09:04.083 #undef SPDK_CONFIG_RAID5F 00:09:04.083 #undef SPDK_CONFIG_RBD 00:09:04.083 #define SPDK_CONFIG_RDMA 1 00:09:04.083 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:04.083 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:04.083 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:04.083 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:04.083 #define SPDK_CONFIG_SHARED 1 00:09:04.083 #undef SPDK_CONFIG_SMA 00:09:04.083 #define SPDK_CONFIG_TESTS 1 00:09:04.083 #undef SPDK_CONFIG_TSAN 00:09:04.083 #define SPDK_CONFIG_UBLK 1 00:09:04.083 #define SPDK_CONFIG_UBSAN 1 00:09:04.083 #undef SPDK_CONFIG_UNIT_TESTS 00:09:04.083 #undef SPDK_CONFIG_URING 00:09:04.083 #define SPDK_CONFIG_URING_PATH 00:09:04.083 #undef SPDK_CONFIG_URING_ZNS 00:09:04.083 #undef SPDK_CONFIG_USDT 00:09:04.083 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:04.083 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:04.083 #undef SPDK_CONFIG_VFIO_USER 00:09:04.083 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:04.083 #define SPDK_CONFIG_VHOST 1 00:09:04.083 #define SPDK_CONFIG_VIRTIO 1 00:09:04.083 #undef SPDK_CONFIG_VTUNE 00:09:04.083 #define SPDK_CONFIG_VTUNE_DIR 00:09:04.083 #define SPDK_CONFIG_WERROR 1 00:09:04.083 #define SPDK_CONFIG_WPDK_DIR 00:09:04.083 #undef SPDK_CONFIG_XNVME 00:09:04.083 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:04.083 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:04.084 07:35:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:04.084 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
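The entries above show autotest_common.sh assembling the sanitizer environment before any test binary runs: a leak suppression for libfuse3 is written to /var/tmp/asan_suppression_file, and ASAN/UBSAN/LSAN are pointed at strict, abort-on-error settings. A minimal sketch of that same pattern, reconstructed from the traced commands (all paths and option strings are copied from the trace; nothing else is assumed):

# sanitizer setup as traced in autotest_common.sh
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"   # suppress the known libfuse3 leak
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock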
00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 968068 ]] 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 968068 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.y07IsY 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.y07IsY/tests/target /tmp/spdk.y07IsY 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55294443520 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6700249088 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:09:04.085 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996520960 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:09:04.086 07:35:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=827392 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:04.086 * Looking for test storage... 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:04.086 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55294443520 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8914841600 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:04.344 07:35:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.344 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
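Just before sourcing nvmf/common.sh above, set_test_storage decided where the filesystem test data will live: it generates a fallback directory name with mktemp -udt, parses df -T into per-mount associative arrays, and settles on the first candidate directory whose filesystem can hold the requested 2 GiB (here / on the spdk_root overlay, with about 55 GB available). A condensed sketch of that selection, reconstructed from the traced commands; $testdir comes from the caller, and the early-exit loop structure is an assumption, not the script's exact control flow:

# storage selection as traced in set_test_storage
requested_size=2147483648                      # 2 GiB, as requested in the trace
storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.y07IsY
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source; fss["$mount"]=$fs
    sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)           # the trace shows byte-sized values
for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space >= requested_size )) && break
done
export SPDK_TEST_STORAGE=$target_dir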
00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.345 07:35:55 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.345 07:35:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:06.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.255 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:06.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.256 07:35:57 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:06.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:06.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:09:06.256 00:09:06.256 --- 10.0.0.2 ping statistics --- 00:09:06.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.256 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:09:06.256 00:09:06.256 --- 10.0.0.1 ping statistics --- 00:09:06.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.256 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.256 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.566 ************************************ 00:09:06.566 START TEST nvmf_filesystem_no_in_capsule 00:09:06.566 ************************************ 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=969693 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 969693 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 969693 ']' 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.566 07:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:06.566 [2024-07-15 07:35:57.617683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:06.566 [2024-07-15 07:35:57.617810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.566 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.566 [2024-07-15 07:35:57.759400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.824 [2024-07-15 07:35:58.027532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.824 [2024-07-15 07:35:58.027605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.824 [2024-07-15 07:35:58.027633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.824 [2024-07-15 07:35:58.027654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.824 [2024-07-15 07:35:58.027676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
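The namespace wiring that nvmf_tcp_init traces above condenses to roughly the following sequence. Interface names, addresses, and the port are taken from this run; the variable names are illustrative, and this is a sketch rather than the exact nvmf/common.sh code.

#!/usr/bin/env bash
# Sketch: the target interface (cvl_0_0) moves into its own network namespace,
# the initiator interface (cvl_0_1) stays in the default namespace.
set -e
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port

# Sanity checks, as in the log: each side must reach the other.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# nvmf_tgt is then started inside the namespace, mirroring the traced command:
#   ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF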
00:09:06.824 [2024-07-15 07:35:58.027802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.824 [2024-07-15 07:35:58.027859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.824 [2024-07-15 07:35:58.028303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.824 [2024-07-15 07:35:58.028308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.395 [2024-07-15 07:35:58.580020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.395 07:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.962 Malloc1 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.962 [2024-07-15 07:35:59.166171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:07.962 { 00:09:07.962 "name": "Malloc1", 00:09:07.962 "aliases": [ 00:09:07.962 "0779cceb-19d5-42f7-8556-94405971d9e4" 00:09:07.962 ], 00:09:07.962 "product_name": "Malloc disk", 00:09:07.962 "block_size": 512, 00:09:07.962 "num_blocks": 1048576, 00:09:07.962 "uuid": "0779cceb-19d5-42f7-8556-94405971d9e4", 00:09:07.962 "assigned_rate_limits": { 00:09:07.962 "rw_ios_per_sec": 0, 00:09:07.962 "rw_mbytes_per_sec": 0, 00:09:07.962 "r_mbytes_per_sec": 0, 00:09:07.962 "w_mbytes_per_sec": 0 00:09:07.962 }, 00:09:07.962 "claimed": true, 00:09:07.962 "claim_type": "exclusive_write", 00:09:07.962 "zoned": false, 00:09:07.962 "supported_io_types": { 00:09:07.962 "read": true, 00:09:07.962 "write": true, 00:09:07.962 "unmap": true, 00:09:07.962 "flush": true, 00:09:07.962 "reset": true, 00:09:07.962 "nvme_admin": false, 00:09:07.962 "nvme_io": false, 00:09:07.962 "nvme_io_md": false, 00:09:07.962 "write_zeroes": true, 00:09:07.962 "zcopy": true, 00:09:07.962 "get_zone_info": false, 00:09:07.962 "zone_management": false, 00:09:07.962 "zone_append": false, 00:09:07.962 "compare": false, 00:09:07.962 "compare_and_write": false, 00:09:07.962 "abort": true, 00:09:07.962 "seek_hole": false, 00:09:07.962 "seek_data": false, 00:09:07.962 "copy": true, 00:09:07.962 "nvme_iov_md": false 00:09:07.962 }, 00:09:07.962 "memory_domains": [ 00:09:07.962 { 
00:09:07.962 "dma_device_id": "system", 00:09:07.962 "dma_device_type": 1 00:09:07.962 }, 00:09:07.962 { 00:09:07.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.962 "dma_device_type": 2 00:09:07.962 } 00:09:07.962 ], 00:09:07.962 "driver_specific": {} 00:09:07.962 } 00:09:07.962 ]' 00:09:07.962 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:08.221 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:08.222 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:08.222 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:08.222 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:08.222 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:08.222 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:08.222 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.789 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.789 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:08.789 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.789 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:08.789 07:35:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:10.687 07:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:10.949 07:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:11.208 07:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.585 ************************************ 00:09:12.585 START TEST filesystem_ext4 00:09:12.585 ************************************ 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:12.585 07:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:12.585 07:36:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:12.585 mke2fs 1.46.5 (30-Dec-2021) 00:09:12.585 Discarding device blocks: 0/522240 done 00:09:12.585 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:12.585 Filesystem UUID: 88ea4425-b984-4228-ad20-af2e9bf1b928 00:09:12.585 Superblock backups stored on blocks: 00:09:12.585 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:12.585 00:09:12.585 Allocating group tables: 0/64 done 00:09:12.585 Writing inode tables: 0/64 done 00:09:13.962 Creating journal (8192 blocks): done 00:09:13.962 Writing superblocks and filesystem accounting information: 0/64 done 00:09:13.962 00:09:13.962 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:14.529 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:14.529 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:14.529 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:14.529 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:14.529 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:14.529 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 969693 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.787 00:09:14.787 real 0m2.401s 00:09:14.787 user 0m0.021s 00:09:14.787 sys 0m0.057s 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:14.787 ************************************ 00:09:14.787 END TEST filesystem_ext4 00:09:14.787 ************************************ 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:14.787 07:36:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.787 ************************************ 00:09:14.787 START TEST filesystem_btrfs 00:09:14.787 ************************************ 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:14.787 07:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:15.045 btrfs-progs v6.6.2 00:09:15.045 See https://btrfs.readthedocs.io for more information. 00:09:15.045 00:09:15.045 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:15.045 NOTE: several default settings have changed in version 5.15, please make sure 00:09:15.045 this does not affect your deployments: 00:09:15.045 - DUP for metadata (-m dup) 00:09:15.045 - enabled no-holes (-O no-holes) 00:09:15.045 - enabled free-space-tree (-R free-space-tree) 00:09:15.045 00:09:15.045 Label: (null) 00:09:15.045 UUID: 19a9016d-692a-4099-a6cb-e3b47be03f58 00:09:15.045 Node size: 16384 00:09:15.045 Sector size: 4096 00:09:15.045 Filesystem size: 510.00MiB 00:09:15.045 Block group profiles: 00:09:15.045 Data: single 8.00MiB 00:09:15.045 Metadata: DUP 32.00MiB 00:09:15.045 System: DUP 8.00MiB 00:09:15.045 SSD detected: yes 00:09:15.045 Zoned device: no 00:09:15.045 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:15.045 Runtime features: free-space-tree 00:09:15.045 Checksum: crc32c 00:09:15.045 Number of devices: 1 00:09:15.045 Devices: 00:09:15.045 ID SIZE PATH 00:09:15.045 1 510.00MiB /dev/nvme0n1p1 00:09:15.045 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 969693 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:15.045 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:15.303 00:09:15.303 real 0m0.425s 00:09:15.303 user 0m0.015s 00:09:15.303 sys 0m0.112s 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:15.303 ************************************ 00:09:15.303 END TEST filesystem_btrfs 00:09:15.303 ************************************ 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.303 ************************************ 00:09:15.303 START TEST filesystem_xfs 00:09:15.303 ************************************ 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:15.303 07:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:15.303 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:15.303 = sectsz=512 attr=2, projid32bit=1 00:09:15.303 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:15.303 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:15.303 data = bsize=4096 blocks=130560, imaxpct=25 00:09:15.303 = sunit=0 swidth=0 blks 00:09:15.303 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:15.303 log =internal log bsize=4096 blocks=16384, version=2 00:09:15.303 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:15.303 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:16.238 Discarding blocks...Done. 
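Each filesystem_* subtest in this pass follows the same create-and-verify pattern from target/filesystem.sh; condensed into a sketch below (device path and target pid are from this run, and the loop structure is illustrative rather than the script itself).

# Partition once, then for each filesystem type: mkfs, mount, do one write
# round-trip over NVMe/TCP, unmount, and confirm the target survived.
DEV=/dev/nvme0n1
PART=${DEV}p1
TGT_PID=969693                 # nvmf_tgt pid reported earlier in this log

mkdir -p /mnt/device
parted -s "$DEV" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe; sleep 1

for fstype in ext4 btrfs xfs; do
    force=-f; [ "$fstype" = ext4 ] && force=-F   # mkfs.ext4 takes -F, the others -f
    mkfs."$fstype" "$force" "$PART"
    mount "$PART" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$TGT_PID"                           # target process still alive?
    lsblk -l -o NAME | grep -q -w "${PART##*/}"  # partition still visible?
done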
00:09:16.238 07:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:16.238 07:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:18.142 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:18.142 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:18.142 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 969693 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:18.143 00:09:18.143 real 0m2.961s 00:09:18.143 user 0m0.016s 00:09:18.143 sys 0m0.050s 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:18.143 ************************************ 00:09:18.143 END TEST filesystem_xfs 00:09:18.143 ************************************ 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:18.143 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:18.401 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:18.401 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.660 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.660 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.661 07:36:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 969693 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 969693 ']' 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 969693 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 969693 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 969693' 00:09:18.661 killing process with pid 969693 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 969693 00:09:18.661 07:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 969693 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:21.192 00:09:21.192 real 0m14.803s 00:09:21.192 user 0m54.614s 00:09:21.192 sys 0m2.013s 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.192 ************************************ 00:09:21.192 END TEST nvmf_filesystem_no_in_capsule 00:09:21.192 ************************************ 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:21.192 ************************************ 00:09:21.192 START TEST nvmf_filesystem_in_capsule 00:09:21.192 ************************************ 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=972267 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 972267 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 972267 ']' 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.192 07:36:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.452 [2024-07-15 07:36:12.475416] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:21.452 [2024-07-15 07:36:12.475557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.452 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.452 [2024-07-15 07:36:12.610991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.715 [2024-07-15 07:36:12.855008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.715 [2024-07-15 07:36:12.855067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:21.715 [2024-07-15 07:36:12.855091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.715 [2024-07-15 07:36:12.855109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.715 [2024-07-15 07:36:12.855127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.715 [2024-07-15 07:36:12.855292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.715 [2024-07-15 07:36:12.856914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.715 [2024-07-15 07:36:12.856980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.715 [2024-07-15 07:36:12.856989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.328 [2024-07-15 07:36:13.471357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.328 07:36:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.892 Malloc1 00:09:22.892 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.893 07:36:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.893 [2024-07-15 07:36:14.048594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:22.893 { 00:09:22.893 "name": "Malloc1", 00:09:22.893 "aliases": [ 00:09:22.893 "60c7ecdc-5c59-4702-b6f3-0f00c8492733" 00:09:22.893 ], 00:09:22.893 "product_name": "Malloc disk", 00:09:22.893 "block_size": 512, 00:09:22.893 "num_blocks": 1048576, 00:09:22.893 "uuid": "60c7ecdc-5c59-4702-b6f3-0f00c8492733", 00:09:22.893 "assigned_rate_limits": { 00:09:22.893 "rw_ios_per_sec": 0, 00:09:22.893 "rw_mbytes_per_sec": 0, 00:09:22.893 "r_mbytes_per_sec": 0, 00:09:22.893 "w_mbytes_per_sec": 0 00:09:22.893 }, 00:09:22.893 "claimed": true, 00:09:22.893 "claim_type": "exclusive_write", 00:09:22.893 "zoned": false, 00:09:22.893 "supported_io_types": { 00:09:22.893 "read": true, 00:09:22.893 "write": true, 00:09:22.893 "unmap": true, 00:09:22.893 "flush": true, 00:09:22.893 "reset": true, 00:09:22.893 "nvme_admin": false, 00:09:22.893 "nvme_io": false, 00:09:22.893 "nvme_io_md": false, 00:09:22.893 "write_zeroes": true, 00:09:22.893 "zcopy": true, 00:09:22.893 "get_zone_info": false, 00:09:22.893 "zone_management": false, 00:09:22.893 
"zone_append": false, 00:09:22.893 "compare": false, 00:09:22.893 "compare_and_write": false, 00:09:22.893 "abort": true, 00:09:22.893 "seek_hole": false, 00:09:22.893 "seek_data": false, 00:09:22.893 "copy": true, 00:09:22.893 "nvme_iov_md": false 00:09:22.893 }, 00:09:22.893 "memory_domains": [ 00:09:22.893 { 00:09:22.893 "dma_device_id": "system", 00:09:22.893 "dma_device_type": 1 00:09:22.893 }, 00:09:22.893 { 00:09:22.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.893 "dma_device_type": 2 00:09:22.893 } 00:09:22.893 ], 00:09:22.893 "driver_specific": {} 00:09:22.893 } 00:09:22.893 ]' 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:22.893 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:23.150 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:23.151 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:23.151 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:23.151 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:23.151 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.758 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.758 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.758 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.758 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.758 07:36:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:25.656 07:36:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:26.221 07:36:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:26.479 07:36:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.410 ************************************ 00:09:27.410 START TEST filesystem_in_capsule_ext4 00:09:27.410 ************************************ 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:27.410 07:36:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:27.410 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:27.410 mke2fs 1.46.5 (30-Dec-2021) 00:09:27.667 Discarding device blocks: 0/522240 done 00:09:27.667 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:27.667 Filesystem UUID: 5f931563-1984-4d7c-b193-5ee5e2a52233 00:09:27.667 Superblock backups stored on blocks: 00:09:27.667 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:27.667 00:09:27.667 Allocating group tables: 0/64 done 00:09:27.667 Writing inode tables: 0/64 done 00:09:27.925 Creating journal (8192 blocks): done 00:09:27.925 Writing superblocks and filesystem accounting information: 0/64 done 00:09:27.925 00:09:27.925 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:27.925 07:36:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.925 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.925 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:27.925 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.925 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:27.925 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 972267 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:28.183 00:09:28.183 real 0m0.647s 00:09:28.183 user 0m0.020s 00:09:28.183 sys 0m0.048s 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 ************************************ 00:09:28.183 END TEST filesystem_in_capsule_ext4 00:09:28.183 ************************************ 00:09:28.183 
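The size cross-check that precedes each mkfs in these passes (get_bdev_size and sec_size_to_bytes above) reduces to comparing the malloc bdev's byte size against the block device's size; a reconstruction under the same assumptions (rpc.py client, names from this run; the sysfs detail is inferred, not shown in the trace).

# get_bdev_size: block_size * num_blocks from bdev_get_bdevs.
bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
malloc_size=$(( bs * nb ))                                              # 536870912

# sec_size_to_bytes: /sys/block/<dev>/size counts 512-byte sectors.
nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
(( nvme_size == malloc_size ))   # the test only proceeds when these match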
07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.183 ************************************ 00:09:28.183 START TEST filesystem_in_capsule_btrfs 00:09:28.183 ************************************ 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:28.183 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:28.440 btrfs-progs v6.6.2 00:09:28.440 See https://btrfs.readthedocs.io for more information. 00:09:28.440 00:09:28.440 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:28.440 NOTE: several default settings have changed in version 5.15, please make sure 00:09:28.440 this does not affect your deployments: 00:09:28.440 - DUP for metadata (-m dup) 00:09:28.440 - enabled no-holes (-O no-holes) 00:09:28.440 - enabled free-space-tree (-R free-space-tree) 00:09:28.440 00:09:28.440 Label: (null) 00:09:28.440 UUID: 07dde998-a989-4972-970b-12e381440784 00:09:28.440 Node size: 16384 00:09:28.440 Sector size: 4096 00:09:28.440 Filesystem size: 510.00MiB 00:09:28.440 Block group profiles: 00:09:28.440 Data: single 8.00MiB 00:09:28.440 Metadata: DUP 32.00MiB 00:09:28.440 System: DUP 8.00MiB 00:09:28.440 SSD detected: yes 00:09:28.440 Zoned device: no 00:09:28.440 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:28.440 Runtime features: free-space-tree 00:09:28.440 Checksum: crc32c 00:09:28.440 Number of devices: 1 00:09:28.440 Devices: 00:09:28.440 ID SIZE PATH 00:09:28.440 1 510.00MiB /dev/nvme0n1p1 00:09:28.440 00:09:28.440 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:28.440 07:36:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 972267 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:29.373 00:09:29.373 real 0m1.127s 00:09:29.373 user 0m0.026s 00:09:29.373 sys 0m0.111s 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:29.373 ************************************ 00:09:29.373 END TEST filesystem_in_capsule_btrfs 00:09:29.373 ************************************ 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.373 ************************************ 00:09:29.373 START TEST filesystem_in_capsule_xfs 00:09:29.373 ************************************ 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:29.373 07:36:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:29.373 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:29.373 = sectsz=512 attr=2, projid32bit=1 00:09:29.373 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:29.373 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:29.373 data = bsize=4096 blocks=130560, imaxpct=25 00:09:29.373 = sunit=0 swidth=0 blks 00:09:29.373 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:29.373 log =internal log bsize=4096 blocks=16384, version=2 00:09:29.373 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:29.373 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:30.305 Discarding blocks...Done. 
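[annotation] mkfs.xfs prints its geometry before the mount step that follows. All three passes funnel through the same make_filesystem helper, whose force-flag dispatch is visible in the xtrace ('[' ext4 = ext4 ']' vs '[' btrfs = ext4 ']' vs '[' xfs = ext4 ']'). A sketch reconstructed from those lines; the helper's retry counter (local i=0) and return-code handling are elided:

    # common/autotest_common.sh@924-935, condensed from the xtrace
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F                 # mke2fs spells "force" as -F
        else
            force=-f                 # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }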
00:09:30.305 07:36:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:30.305 07:36:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 972267 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:32.834 00:09:32.834 real 0m3.443s 00:09:32.834 user 0m0.021s 00:09:32.834 sys 0m0.057s 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:32.834 ************************************ 00:09:32.834 END TEST filesystem_in_capsule_xfs 00:09:32.834 ************************************ 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:32.834 07:36:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.092 07:36:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 972267 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 972267 ']' 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 972267 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972267 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972267' 00:09:33.092 killing process with pid 972267 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 972267 00:09:33.092 07:36:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 972267 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:35.618 00:09:35.618 real 0m14.373s 00:09:35.618 user 0m53.111s 00:09:35.618 sys 0m1.914s 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.618 ************************************ 00:09:35.618 END TEST nvmf_filesystem_in_capsule 00:09:35.618 ************************************ 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.618 rmmod nvme_tcp 00:09:35.618 rmmod nvme_fabrics 00:09:35.618 rmmod nvme_keyring 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.618 07:36:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.156 07:36:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.156 00:09:38.156 real 0m33.688s 00:09:38.156 user 1m48.641s 00:09:38.156 sys 0m5.527s 00:09:38.156 07:36:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.156 07:36:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.156 ************************************ 00:09:38.156 END TEST nvmf_filesystem 00:09:38.156 ************************************ 00:09:38.156 07:36:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:38.156 07:36:28 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:38.156 07:36:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:38.156 07:36:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.156 07:36:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.156 ************************************ 00:09:38.156 START TEST nvmf_target_discovery 00:09:38.156 ************************************ 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:38.156 * Looking for test storage... 
00:09:38.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.156 07:36:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.061 07:36:30 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:40.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:40.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:40.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:40.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.061 07:36:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:09:40.062 00:09:40.062 --- 10.0.0.2 ping statistics --- 00:09:40.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.062 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:09:40.062 00:09:40.062 --- 10.0.0.1 ping statistics --- 00:09:40.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.062 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=976047 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 976047 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 976047 ']' 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:40.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.062 07:36:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.062 [2024-07-15 07:36:31.215611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:40.062 [2024-07-15 07:36:31.215773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.320 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.320 [2024-07-15 07:36:31.360794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.579 [2024-07-15 07:36:31.627957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.579 [2024-07-15 07:36:31.628041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.579 [2024-07-15 07:36:31.628070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.579 [2024-07-15 07:36:31.628092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.579 [2024-07-15 07:36:31.628113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.579 [2024-07-15 07:36:31.628242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.579 [2024-07-15 07:36:31.628303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.579 [2024-07-15 07:36:31.628354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.579 [2024-07-15 07:36:31.628366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 [2024-07-15 07:36:32.155458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
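[annotation] The loop starting here builds the four-subsystem discovery topology: one null bdev, subsystem, namespace, and TCP listener per iteration, followed by the discovery listener and a port-4430 referral. Condensed from the xtrace that follows; rpc_cmd is assumed to resolve to scripts/rpc.py against the target started above:

    # target/discovery.sh@26-35, condensed; 102400/512 are NULL_BDEV_SIZE
    # and NULL_BLOCK_SIZE from discovery.sh@11-12
    for i in $(seq 1 4); do
        rpc.py bdev_null_create Null$i 102400 512
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

This is what produces the six-record discovery log printed below: entry 0 is the current discovery subsystem, entries 1-4 the cnode subsystems on 4420, and entry 5 the 4430 referral.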
00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 Null1 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 [2024-07-15 07:36:32.197100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 Null2 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:41.147 07:36:32 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 Null3 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 Null4 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.147 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:41.406 00:09:41.406 Discovery Log Number of Records 6, Generation counter 6 00:09:41.406 =====Discovery Log Entry 0====== 00:09:41.406 trtype: tcp 00:09:41.406 adrfam: ipv4 00:09:41.406 subtype: current discovery subsystem 00:09:41.406 treq: not required 00:09:41.406 portid: 0 00:09:41.406 trsvcid: 4420 00:09:41.406 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:41.406 traddr: 10.0.0.2 00:09:41.406 eflags: explicit discovery connections, duplicate discovery information 00:09:41.406 sectype: none 00:09:41.406 =====Discovery Log Entry 1====== 00:09:41.406 trtype: tcp 00:09:41.406 adrfam: ipv4 00:09:41.406 subtype: nvme subsystem 00:09:41.406 treq: not required 00:09:41.406 portid: 0 00:09:41.406 trsvcid: 4420 00:09:41.406 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:41.406 traddr: 10.0.0.2 00:09:41.406 eflags: none 00:09:41.407 sectype: none 00:09:41.407 =====Discovery Log Entry 2====== 00:09:41.407 trtype: tcp 00:09:41.407 adrfam: ipv4 00:09:41.407 subtype: nvme subsystem 00:09:41.407 treq: not required 00:09:41.407 portid: 0 00:09:41.407 trsvcid: 4420 00:09:41.407 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:41.407 traddr: 10.0.0.2 00:09:41.407 eflags: none 00:09:41.407 sectype: none 00:09:41.407 =====Discovery Log Entry 3====== 00:09:41.407 trtype: tcp 00:09:41.407 adrfam: ipv4 00:09:41.407 subtype: nvme subsystem 00:09:41.407 treq: not required 00:09:41.407 portid: 0 00:09:41.407 trsvcid: 4420 00:09:41.407 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:41.407 traddr: 10.0.0.2 00:09:41.407 eflags: none 00:09:41.407 sectype: none 00:09:41.407 =====Discovery Log Entry 4====== 00:09:41.407 trtype: tcp 00:09:41.407 adrfam: ipv4 00:09:41.407 subtype: nvme subsystem 00:09:41.407 treq: not required 
00:09:41.407 portid: 0 00:09:41.407 trsvcid: 4420 00:09:41.407 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:41.407 traddr: 10.0.0.2 00:09:41.407 eflags: none 00:09:41.407 sectype: none 00:09:41.407 =====Discovery Log Entry 5====== 00:09:41.407 trtype: tcp 00:09:41.407 adrfam: ipv4 00:09:41.407 subtype: discovery subsystem referral 00:09:41.407 treq: not required 00:09:41.407 portid: 0 00:09:41.407 trsvcid: 4430 00:09:41.407 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:41.407 traddr: 10.0.0.2 00:09:41.407 eflags: none 00:09:41.407 sectype: none 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:41.407 Perform nvmf subsystem discovery via RPC 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 [ 00:09:41.407 { 00:09:41.407 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:41.407 "subtype": "Discovery", 00:09:41.407 "listen_addresses": [ 00:09:41.407 { 00:09:41.407 "trtype": "TCP", 00:09:41.407 "adrfam": "IPv4", 00:09:41.407 "traddr": "10.0.0.2", 00:09:41.407 "trsvcid": "4420" 00:09:41.407 } 00:09:41.407 ], 00:09:41.407 "allow_any_host": true, 00:09:41.407 "hosts": [] 00:09:41.407 }, 00:09:41.407 { 00:09:41.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.407 "subtype": "NVMe", 00:09:41.407 "listen_addresses": [ 00:09:41.407 { 00:09:41.407 "trtype": "TCP", 00:09:41.407 "adrfam": "IPv4", 00:09:41.407 "traddr": "10.0.0.2", 00:09:41.407 "trsvcid": "4420" 00:09:41.407 } 00:09:41.407 ], 00:09:41.407 "allow_any_host": true, 00:09:41.407 "hosts": [], 00:09:41.407 "serial_number": "SPDK00000000000001", 00:09:41.407 "model_number": "SPDK bdev Controller", 00:09:41.407 "max_namespaces": 32, 00:09:41.407 "min_cntlid": 1, 00:09:41.407 "max_cntlid": 65519, 00:09:41.407 "namespaces": [ 00:09:41.407 { 00:09:41.407 "nsid": 1, 00:09:41.407 "bdev_name": "Null1", 00:09:41.407 "name": "Null1", 00:09:41.407 "nguid": "5F786D822F48408E9521856AB82A8EC0", 00:09:41.407 "uuid": "5f786d82-2f48-408e-9521-856ab82a8ec0" 00:09:41.407 } 00:09:41.407 ] 00:09:41.407 }, 00:09:41.407 { 00:09:41.407 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:41.407 "subtype": "NVMe", 00:09:41.407 "listen_addresses": [ 00:09:41.407 { 00:09:41.407 "trtype": "TCP", 00:09:41.407 "adrfam": "IPv4", 00:09:41.407 "traddr": "10.0.0.2", 00:09:41.407 "trsvcid": "4420" 00:09:41.407 } 00:09:41.407 ], 00:09:41.407 "allow_any_host": true, 00:09:41.407 "hosts": [], 00:09:41.407 "serial_number": "SPDK00000000000002", 00:09:41.407 "model_number": "SPDK bdev Controller", 00:09:41.407 "max_namespaces": 32, 00:09:41.407 "min_cntlid": 1, 00:09:41.407 "max_cntlid": 65519, 00:09:41.407 "namespaces": [ 00:09:41.407 { 00:09:41.407 "nsid": 1, 00:09:41.407 "bdev_name": "Null2", 00:09:41.407 "name": "Null2", 00:09:41.407 "nguid": "E8766368A27D4A2381DB9225DBE5F834", 00:09:41.407 "uuid": "e8766368-a27d-4a23-81db-9225dbe5f834" 00:09:41.407 } 00:09:41.407 ] 00:09:41.407 }, 00:09:41.407 { 00:09:41.407 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:41.407 "subtype": "NVMe", 00:09:41.407 "listen_addresses": [ 00:09:41.407 { 00:09:41.407 "trtype": "TCP", 00:09:41.407 "adrfam": "IPv4", 00:09:41.407 "traddr": "10.0.0.2", 00:09:41.407 "trsvcid": "4420" 00:09:41.407 } 00:09:41.407 ], 00:09:41.407 "allow_any_host": true, 
00:09:41.407 "hosts": [], 00:09:41.407 "serial_number": "SPDK00000000000003", 00:09:41.407 "model_number": "SPDK bdev Controller", 00:09:41.407 "max_namespaces": 32, 00:09:41.407 "min_cntlid": 1, 00:09:41.407 "max_cntlid": 65519, 00:09:41.407 "namespaces": [ 00:09:41.407 { 00:09:41.407 "nsid": 1, 00:09:41.407 "bdev_name": "Null3", 00:09:41.407 "name": "Null3", 00:09:41.407 "nguid": "98D6DFB95F7F4B4A8ECBF88FC3B98C4B", 00:09:41.407 "uuid": "98d6dfb9-5f7f-4b4a-8ecb-f88fc3b98c4b" 00:09:41.407 } 00:09:41.407 ] 00:09:41.407 }, 00:09:41.407 { 00:09:41.407 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:41.407 "subtype": "NVMe", 00:09:41.407 "listen_addresses": [ 00:09:41.407 { 00:09:41.407 "trtype": "TCP", 00:09:41.407 "adrfam": "IPv4", 00:09:41.407 "traddr": "10.0.0.2", 00:09:41.407 "trsvcid": "4420" 00:09:41.407 } 00:09:41.407 ], 00:09:41.407 "allow_any_host": true, 00:09:41.407 "hosts": [], 00:09:41.407 "serial_number": "SPDK00000000000004", 00:09:41.407 "model_number": "SPDK bdev Controller", 00:09:41.407 "max_namespaces": 32, 00:09:41.407 "min_cntlid": 1, 00:09:41.407 "max_cntlid": 65519, 00:09:41.407 "namespaces": [ 00:09:41.407 { 00:09:41.407 "nsid": 1, 00:09:41.407 "bdev_name": "Null4", 00:09:41.407 "name": "Null4", 00:09:41.407 "nguid": "9721935D31E94251860B3AD73F23E4DA", 00:09:41.407 "uuid": "9721935d-31e9-4251-860b-3ad73f23e4da" 00:09:41.407 } 00:09:41.407 ] 00:09:41.407 } 00:09:41.407 ] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:41.407 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.408 rmmod nvme_tcp 00:09:41.408 rmmod nvme_fabrics 00:09:41.408 rmmod nvme_keyring 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 976047 ']' 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 976047 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 976047 ']' 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 976047 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.408 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 976047 00:09:41.666 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.666 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.666 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 976047' 00:09:41.666 killing process with pid 976047 00:09:41.666 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 976047 00:09:41.666 07:36:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 976047 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.040 07:36:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.942 07:36:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.942 00:09:44.942 real 0m7.045s 00:09:44.942 user 0m8.567s 00:09:44.942 sys 0m2.016s 00:09:44.942 07:36:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.942 07:36:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:44.942 ************************************ 00:09:44.942 END TEST nvmf_target_discovery 00:09:44.942 ************************************ 00:09:44.942 07:36:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:44.942 07:36:35 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:44.942 07:36:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.942 07:36:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.942 07:36:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.942 ************************************ 00:09:44.942 START TEST nvmf_referrals 00:09:44.942 ************************************ 00:09:44.942 07:36:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:44.942 * Looking for test storage... 00:09:44.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
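What referrals.sh exercises below, reduced to bare commands: register the three loopback referrals (127.0.0.2-4 on port 4430, the constants just set) against the discovery service, then confirm the same set is visible both over RPC and from the host via the kernel initiator. A hedged sketch under the same assumptions as the previous one (rpc.py path, target socket); the addresses are the suite's test constants, not reachable targets, and the jq filter is the one get_referral_ips uses in the trace.

  rpc=./scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  "$rpc" nvmf_discovery_get_referrals | jq length   # the test expects 3 here
  # Cross-check from the initiator side, dropping the current discovery
  # subsystem record the same way the suite does:
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort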
00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.943 07:36:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.844 07:36:38 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:46.844 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:46.844 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:46.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.845 07:36:38 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:46.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:46.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.845 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.105 07:36:38 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:09:47.105 00:09:47.105 --- 10.0.0.2 ping statistics --- 00:09:47.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.105 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:47.105 00:09:47.105 --- 10.0.0.1 ping statistics --- 00:09:47.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.105 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=978367 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 978367 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 978367 ']' 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
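The network plumbing just performed, condensed: nvmf_tcp_init isolates the target NIC in its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) can live on one host, then launches nvmf_tgt inside that namespace. A sketch of the same wiring; cvl_0_0/cvl_0_1 are this rig's ice ports (substitute your own interface names), and the nvmf_tgt path assumes a built SPDK tree.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  # the target then runs inside the namespace, as nvmfappstart does above:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF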
00:09:47.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.105 07:36:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.105 [2024-07-15 07:36:38.270946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:47.105 [2024-07-15 07:36:38.271094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.364 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.364 [2024-07-15 07:36:38.414621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.623 [2024-07-15 07:36:38.665940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.623 [2024-07-15 07:36:38.666016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.623 [2024-07-15 07:36:38.666056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.623 [2024-07-15 07:36:38.666074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.623 [2024-07-15 07:36:38.666093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.623 [2024-07-15 07:36:38.666211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.623 [2024-07-15 07:36:38.666277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.623 [2024-07-15 07:36:38.666318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.623 [2024-07-15 07:36:38.666329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 [2024-07-15 07:36:39.245456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 [2024-07-15 07:36:39.259053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:48.190 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.191 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.450 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.709 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:48.966 07:36:39 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:48.966 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:48.966 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:48.966 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:48.966 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.967 07:36:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.967 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:49.225 07:36:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:49.225 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:49.485 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:49.746 
07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.746 rmmod nvme_tcp 00:09:49.746 rmmod nvme_fabrics 00:09:49.746 rmmod nvme_keyring 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 978367 ']' 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 978367 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 978367 ']' 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 978367 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 978367 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 978367' 00:09:49.746 killing process with pid 978367 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 978367 00:09:49.746 07:36:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 978367 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.158 07:36:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.076 07:36:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.076 00:09:53.076 real 0m8.190s 00:09:53.076 user 0m13.853s 00:09:53.076 sys 0m2.286s 00:09:53.076 07:36:44 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.076 07:36:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.076 ************************************ 00:09:53.076 END TEST nvmf_referrals 00:09:53.076 ************************************ 00:09:53.076 07:36:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:53.076 07:36:44 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:53.076 07:36:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.076 07:36:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.076 07:36:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.076 ************************************ 00:09:53.076 START TEST nvmf_connect_disconnect 00:09:53.076 ************************************ 00:09:53.076 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:53.334 * Looking for test storage... 00:09:53.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.334 07:36:44 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.334 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.335 07:36:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.237 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:55.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:55.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.238 07:36:46 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:55.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:55.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.238 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:09:55.498 00:09:55.498 --- 10.0.0.2 ping statistics --- 00:09:55.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.498 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:09:55.498 00:09:55.498 --- 10.0.0.1 ping statistics --- 00:09:55.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.498 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=980806 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 980806 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 980806 ']' 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.498 07:36:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.498 [2024-07-15 07:36:46.665439] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
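For readability, the nvmf_tcp_init sequence traced above boils down to the following namespace plumbing (commands collected verbatim from the trace; cvl_0_0 and cvl_0_1 are the two E810 ports discovered earlier):

    # target port moves into a private namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

Splitting the two ports across namespaces is what forces traffic between 10.0.0.1 and 10.0.0.2 onto the physical link rather than the kernel's local delivery path.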
00:09:55.498 [2024-07-15 07:36:46.665596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.756 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.756 [2024-07-15 07:36:46.814080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.015 [2024-07-15 07:36:47.082705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.015 [2024-07-15 07:36:47.082785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.015 [2024-07-15 07:36:47.082820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.015 [2024-07-15 07:36:47.082841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.015 [2024-07-15 07:36:47.082862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.015 [2024-07-15 07:36:47.082983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.015 [2024-07-15 07:36:47.083024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.015 [2024-07-15 07:36:47.083059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.015 [2024-07-15 07:36:47.083052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.582 [2024-07-15 07:36:47.664190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.582 07:36:47 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:56.582 [2024-07-15 07:36:47.766558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:56.582 07:36:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:59.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.745 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:48.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.722 rmmod nvme_tcp 00:13:51.722 rmmod nvme_fabrics 00:13:51.722 rmmod nvme_keyring 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 980806 ']' 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 980806 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 980806 
']' 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 980806 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980806 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980806' 00:13:51.722 killing process with pid 980806 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 980806 00:13:51.722 07:40:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 980806 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.689 07:40:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.221 07:40:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.221 00:13:55.221 real 4m1.696s 00:13:55.221 user 15m12.824s 00:13:55.221 sys 0m38.561s 00:13:55.221 07:40:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.221 07:40:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:55.221 ************************************ 00:13:55.221 END TEST nvmf_connect_disconnect 00:13:55.221 ************************************ 00:13:55.221 07:40:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.221 07:40:45 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:55.221 07:40:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.221 07:40:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.221 07:40:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.221 ************************************ 00:13:55.221 START TEST nvmf_multitarget 00:13:55.221 ************************************ 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:55.221 * Looking for test storage... 
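Summing up the test that just finished: after nvmf_tgt came up, connect_disconnect.sh provisioned a single subsystem over RPC and then looped one hundred times, connecting and disconnecting the initiator; each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line above is one iteration's output. A minimal sketch of that flow, assuming rpc_cmd forwards to SPDK's scripts/rpc.py and that the real loop also waits for the controller device between the two nvme calls:

    # provisioning, exactly as traced above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                   # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # the iteration loop (num_iterations=100, NVME_CONNECT='nvme connect -i 8')
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
    done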
00:13:55.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.221 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.222 07:40:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:57.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:57.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:57.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:57.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.126 07:40:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
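The PCI probe repeated above (same two E810 ports as in the previous test) reduces to matching supported vendor:device IDs on the bus, then reading each matching function's netdev name out of sysfs. A condensed sketch using lspci, which is an assumption here; the real gather_supported_nvmf_pci_devs in nvmf/common.sh walks a cached PCI map and also recognizes x722 and Mellanox IDs:

    # 8086:159b = Intel E810 (the "Found 0000:0a:00.x" lines above)
    for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
        # the same sysfs glob the trace shows: /sys/bus/pci/devices/$pci/net/*
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdev ]] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done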
00:13:57.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:13:57.126 00:13:57.126 --- 10.0.0.2 ping statistics --- 00:13:57.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.126 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:57.126 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:57.126 00:13:57.126 --- 10.0.0.1 ping statistics --- 00:13:57.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.126 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1012463 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1012463 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1012463 ']' 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.127 07:40:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:57.127 [2024-07-15 07:40:48.216558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
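As in the previous test, nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the app's RPC socket answers. A rough sketch of that wait, assuming the default /var/tmp/spdk.sock socket and SPDK's scripts/rpc.py; the real helper in autotest_common.sh retries longer and reports failures in more detail:

    waitforlisten() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
            # rpc_get_methods only succeeds once the RPC server is listening
            scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }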
00:13:57.127 [2024-07-15 07:40:48.216696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.127 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.385 [2024-07-15 07:40:48.357250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.644 [2024-07-15 07:40:48.616734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.644 [2024-07-15 07:40:48.616819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.644 [2024-07-15 07:40:48.616843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.644 [2024-07-15 07:40:48.616860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.644 [2024-07-15 07:40:48.616885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.644 [2024-07-15 07:40:48.617018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.644 [2024-07-15 07:40:48.617094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.644 [2024-07-15 07:40:48.617139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.644 [2024-07-15 07:40:48.617149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:58.209 "nvmf_tgt_1" 00:13:58.209 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:58.467 "nvmf_tgt_2" 00:13:58.467 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:58.467 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:58.467 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:58.467 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:58.725 true 00:13:58.725 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:58.725 true 00:13:58.725 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:58.725 07:40:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.983 rmmod nvme_tcp 00:13:58.983 rmmod nvme_fabrics 00:13:58.983 rmmod nvme_keyring 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1012463 ']' 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1012463 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1012463 ']' 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1012463 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1012463 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1012463' 00:13:58.983 killing process with pid 1012463 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1012463 00:13:58.983 07:40:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1012463 00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:00.362 07:40:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:02.264 07:40:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:02.264
00:14:02.264 real 0m7.426s
00:14:02.264 user 0m11.575s
00:14:02.264 sys 0m2.069s
00:14:02.264 07:40:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:02.264 07:40:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:14:02.264 ************************************
00:14:02.264 END TEST nvmf_multitarget
00:14:02.264 ************************************
00:14:02.264 07:40:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:14:02.264 07:40:53 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:14:02.264 07:40:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:02.264 07:40:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:02.264 07:40:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:02.264 ************************************
00:14:02.264 START TEST nvmf_rpc
00:14:02.264 ************************************
00:14:02.264 07:40:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:14:02.522 * Looking for test storage...
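Stripped of the xtrace noise, the multitarget test that just ended drives exactly four RPCs through multitarget_rpc.py; a rough equivalent of what was traced (same path, same flags, with the traced '[' length checks written as tests):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1" on success
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1              # prints true on success
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default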
00:14:02.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.522 07:40:53 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.523 07:40:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
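One detail from the common.sh sourcing a little way up is worth unpacking: the host identity is minted once with nvme gen-hostnqn and its uuid is reused as the host ID. A sketch of the derivation (the parameter-expansion strip is a reconstruction, not the verbatim common.sh line; the NVME_HOST array is as traced):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the uuid suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Every nvme connect later in this test passes these two flags, and they are also what the deliberate access-control failures further down key on.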
00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.426 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:04.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:04.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:04.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:04.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:14:04.427 00:14:04.427 --- 10.0.0.2 ping statistics --- 00:14:04.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.427 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:04.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:14:04.427 00:14:04.427 --- 10.0.0.1 ping statistics --- 00:14:04.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.427 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.427 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1014817 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1014817 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1014817 ']' 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.687 07:40:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.687 [2024-07-15 07:40:55.746343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:04.687 [2024-07-15 07:40:55.746487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.687 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.687 [2024-07-15 07:40:55.890537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.948 [2024-07-15 07:40:56.158337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.948 [2024-07-15 07:40:56.158416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
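nvmfappstart above is the step where the test and the plumbing meet: the target binary is exec'd through ip netns so its TCP listener binds inside the namespace, and the test blocks until the RPC socket exists. Roughly, with the polling loop as a simplification of what waitforlisten does:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket before any rpc_cmd

The -m 0xF mask is why four reactors come up on cores 0-3, and -e 0xFFFF enables every tracepoint group, which is what the app_setup_trace notices around this point are advertising.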
00:14:04.948 [2024-07-15 07:40:56.158444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.948 [2024-07-15 07:40:56.158465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.948 [2024-07-15 07:40:56.158494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.948 [2024-07-15 07:40:56.158634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.948 [2024-07-15 07:40:56.158699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.948 [2024-07-15 07:40:56.158746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.948 [2024-07-15 07:40:56.158758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:05.515 "tick_rate": 2700000000, 00:14:05.515 "poll_groups": [ 00:14:05.515 { 00:14:05.515 "name": "nvmf_tgt_poll_group_000", 00:14:05.515 "admin_qpairs": 0, 00:14:05.515 "io_qpairs": 0, 00:14:05.515 "current_admin_qpairs": 0, 00:14:05.515 "current_io_qpairs": 0, 00:14:05.515 "pending_bdev_io": 0, 00:14:05.515 "completed_nvme_io": 0, 00:14:05.515 "transports": [] 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "name": "nvmf_tgt_poll_group_001", 00:14:05.515 "admin_qpairs": 0, 00:14:05.515 "io_qpairs": 0, 00:14:05.515 "current_admin_qpairs": 0, 00:14:05.515 "current_io_qpairs": 0, 00:14:05.515 "pending_bdev_io": 0, 00:14:05.515 "completed_nvme_io": 0, 00:14:05.515 "transports": [] 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "name": "nvmf_tgt_poll_group_002", 00:14:05.515 "admin_qpairs": 0, 00:14:05.515 "io_qpairs": 0, 00:14:05.515 "current_admin_qpairs": 0, 00:14:05.515 "current_io_qpairs": 0, 00:14:05.515 "pending_bdev_io": 0, 00:14:05.515 "completed_nvme_io": 0, 00:14:05.515 "transports": [] 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "name": "nvmf_tgt_poll_group_003", 00:14:05.515 "admin_qpairs": 0, 00:14:05.515 "io_qpairs": 0, 00:14:05.515 "current_admin_qpairs": 0, 00:14:05.515 "current_io_qpairs": 0, 00:14:05.515 "pending_bdev_io": 0, 00:14:05.515 "completed_nvme_io": 0, 00:14:05.515 "transports": [] 00:14:05.515 } 00:14:05.515 ] 00:14:05.515 }' 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:05.515 07:40:56 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.775 [2024-07-15 07:40:56.822642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.775 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:05.776 "tick_rate": 2700000000, 00:14:05.776 "poll_groups": [ 00:14:05.776 { 00:14:05.776 "name": "nvmf_tgt_poll_group_000", 00:14:05.776 "admin_qpairs": 0, 00:14:05.776 "io_qpairs": 0, 00:14:05.776 "current_admin_qpairs": 0, 00:14:05.776 "current_io_qpairs": 0, 00:14:05.776 "pending_bdev_io": 0, 00:14:05.776 "completed_nvme_io": 0, 00:14:05.776 "transports": [ 00:14:05.776 { 00:14:05.776 "trtype": "TCP" 00:14:05.776 } 00:14:05.776 ] 00:14:05.776 }, 00:14:05.776 { 00:14:05.776 "name": "nvmf_tgt_poll_group_001", 00:14:05.776 "admin_qpairs": 0, 00:14:05.776 "io_qpairs": 0, 00:14:05.776 "current_admin_qpairs": 0, 00:14:05.776 "current_io_qpairs": 0, 00:14:05.776 "pending_bdev_io": 0, 00:14:05.776 "completed_nvme_io": 0, 00:14:05.776 "transports": [ 00:14:05.776 { 00:14:05.776 "trtype": "TCP" 00:14:05.776 } 00:14:05.776 ] 00:14:05.776 }, 00:14:05.776 { 00:14:05.776 "name": "nvmf_tgt_poll_group_002", 00:14:05.776 "admin_qpairs": 0, 00:14:05.776 "io_qpairs": 0, 00:14:05.776 "current_admin_qpairs": 0, 00:14:05.776 "current_io_qpairs": 0, 00:14:05.776 "pending_bdev_io": 0, 00:14:05.776 "completed_nvme_io": 0, 00:14:05.776 "transports": [ 00:14:05.776 { 00:14:05.776 "trtype": "TCP" 00:14:05.776 } 00:14:05.776 ] 00:14:05.776 }, 00:14:05.776 { 00:14:05.776 "name": "nvmf_tgt_poll_group_003", 00:14:05.776 "admin_qpairs": 0, 00:14:05.776 "io_qpairs": 0, 00:14:05.776 "current_admin_qpairs": 0, 00:14:05.776 "current_io_qpairs": 0, 00:14:05.776 "pending_bdev_io": 0, 00:14:05.776 "completed_nvme_io": 0, 00:14:05.776 "transports": [ 00:14:05.776 { 00:14:05.776 "trtype": "TCP" 00:14:05.776 } 00:14:05.776 ] 00:14:05.776 } 00:14:05.776 ] 00:14:05.776 }' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
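The length and sum checks being traced here come from two small helpers in rpc.sh layered over the nvmf_get_stats dumps above; reconstructed from the trace, they amount to:

    stats=$(rpc_cmd nvmf_get_stats)
    jcount() { jq "$1" <<<"$stats" | wc -l; }                       # how many nodes match the filter
    jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; } # total of the numeric matches

    (( $(jcount '.poll_groups[].name') == 4 ))         # one poll group per reactor
    (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))   # nothing connected yet
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))

On a fresh target the transports arrays are empty (the [0] lookup returns null); they only gain the TCP entry after the nvmf_create_transport -t tcp -o -u 8192 call traced above.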
00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.776 Malloc1 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.776 07:40:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:05.776 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.776 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 [2024-07-15 07:40:57.028318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:06.036 [2024-07-15 07:40:57.051576] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:14:06.036 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:06.036 could not add new controller: failed to write to nvme-fabrics device 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.036 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.603 07:40:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.603 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.603 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.603 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.603 07:40:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.138 07:40:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:09.138 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.139 [2024-07-15 07:40:59.940170] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:14:09.139 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:09.139 could not add new controller: failed to write to nvme-fabrics device 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.139 07:40:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.706 07:41:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.706 07:41:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.706 07:41:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.706 07:41:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:09.706 07:41:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:11.644 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:11.904 07:41:02 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.904 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.905 [2024-07-15 07:41:02.937768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.905 07:41:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.487 07:41:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.487 07:41:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:12.487 07:41:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.487 07:41:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:12.487 07:41:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:14.391 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 [2024-07-15 07:41:05.783911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.652 07:41:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.589 07:41:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:15.589 07:41:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:14:15.589 07:41:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.589 07:41:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:15.589 07:41:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 [2024-07-15 07:41:08.661199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.489 07:41:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.425 07:41:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.425 07:41:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:18.425 07:41:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.425 07:41:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:18.425 07:41:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:20.327 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 [2024-07-15 07:41:11.629261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.587 07:41:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.154 07:41:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.154 07:41:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:21.154 07:41:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.154 07:41:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:21.154 07:41:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.201 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.201 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:23.201 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.201 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.201 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.201 
07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:23.201 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.460 [2024-07-15 07:41:14.579317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.460 07:41:14 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.460 07:41:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.025 07:41:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.025 07:41:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:24.025 07:41:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.025 07:41:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:24.025 07:41:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.560 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 [2024-07-15 07:41:17.389223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 [2024-07-15 07:41:17.437211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 [2024-07-15 07:41:17.485388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
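The two stress loops rpc.sh is cycling through here can be read straight out of the xtrace: the target/rpc.sh@81 loop above builds cnode1, connects a host with nvme-cli, and polls until the namespace's serial number surfaces in lsblk before tearing everything back down, while the @99 loop running around this point repeats the build/teardown purely over RPC with no host attach. A minimal sketch of the connect variant follows, with waitforserial reconstructed from the trace; rpc_cmd is assumed to wrap spdk/scripts/rpc.py, NVME_HOST carries the --hostnqn/--hostid pair (its definition is visible where invalid.sh sources nvmf/common.sh further down), and the retry delay inside the poll loop is an assumption, since every attempt in this run succeeded on the first check:

    # Reconstructed from the xtrace in this log; not the verbatim helpers.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2                                    # let the controller attach
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2                                # assumed retry delay, not exercised above
        done
        return 1
    }

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME  # inverse check: serial gone from lsblk
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done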
00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 [2024-07-15 07:41:17.533503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
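Once the remaining @99 iterations below finish, rpc.sh closes by sanity-checking the target's accounting: it captures nvmf_get_stats once and sums the per-poll-group qpair counters with the jq/awk helper whose trace appears after the JSON. A sketch of that check, reconstructed from the xtrace (feeding the captured $stats to jq via a here-string is an assumption):

    stats=$(rpc_cmd nvmf_get_stats)       # one JSON entry per poll group

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2 + 2 + 1 + 2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 84 = 336 in this run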
00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 [2024-07-15 07:41:17.581681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.561 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:26.561 "tick_rate": 2700000000, 00:14:26.561 "poll_groups": [ 00:14:26.561 { 00:14:26.561 "name": "nvmf_tgt_poll_group_000", 00:14:26.561 "admin_qpairs": 2, 00:14:26.561 "io_qpairs": 84, 00:14:26.561 "current_admin_qpairs": 0, 00:14:26.561 "current_io_qpairs": 0, 00:14:26.561 "pending_bdev_io": 0, 00:14:26.561 "completed_nvme_io": 143, 00:14:26.561 "transports": [ 00:14:26.562 { 00:14:26.562 "trtype": "TCP" 00:14:26.562 } 00:14:26.562 ] 00:14:26.562 }, 00:14:26.562 { 00:14:26.562 "name": "nvmf_tgt_poll_group_001", 00:14:26.562 "admin_qpairs": 2, 00:14:26.562 "io_qpairs": 84, 00:14:26.562 "current_admin_qpairs": 0, 00:14:26.562 "current_io_qpairs": 0, 00:14:26.562 "pending_bdev_io": 0, 00:14:26.562 "completed_nvme_io": 234, 00:14:26.562 "transports": [ 00:14:26.562 { 00:14:26.562 "trtype": "TCP" 00:14:26.562 } 00:14:26.562 ] 00:14:26.562 }, 00:14:26.562 { 00:14:26.562 
"name": "nvmf_tgt_poll_group_002", 00:14:26.562 "admin_qpairs": 1, 00:14:26.562 "io_qpairs": 84, 00:14:26.562 "current_admin_qpairs": 0, 00:14:26.562 "current_io_qpairs": 0, 00:14:26.562 "pending_bdev_io": 0, 00:14:26.562 "completed_nvme_io": 172, 00:14:26.562 "transports": [ 00:14:26.562 { 00:14:26.562 "trtype": "TCP" 00:14:26.562 } 00:14:26.562 ] 00:14:26.562 }, 00:14:26.562 { 00:14:26.562 "name": "nvmf_tgt_poll_group_003", 00:14:26.562 "admin_qpairs": 2, 00:14:26.562 "io_qpairs": 84, 00:14:26.562 "current_admin_qpairs": 0, 00:14:26.562 "current_io_qpairs": 0, 00:14:26.562 "pending_bdev_io": 0, 00:14:26.562 "completed_nvme_io": 137, 00:14:26.562 "transports": [ 00:14:26.562 { 00:14:26.562 "trtype": "TCP" 00:14:26.562 } 00:14:26.562 ] 00:14:26.562 } 00:14:26.562 ] 00:14:26.562 }' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.562 rmmod nvme_tcp 00:14:26.562 rmmod nvme_fabrics 00:14:26.562 rmmod nvme_keyring 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1014817 ']' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1014817 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1014817 ']' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1014817 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.562 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014817 00:14:26.820 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:14:26.820 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.820 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014817' 00:14:26.820 killing process with pid 1014817 00:14:26.820 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1014817 00:14:26.820 07:41:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1014817 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.196 07:41:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.137 07:41:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.137 00:14:30.137 real 0m27.857s 00:14:30.137 user 1m29.813s 00:14:30.137 sys 0m4.393s 00:14:30.137 07:41:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.137 07:41:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.137 ************************************ 00:14:30.137 END TEST nvmf_rpc 00:14:30.137 ************************************ 00:14:30.395 07:41:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.395 07:41:21 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:30.395 07:41:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.395 07:41:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.395 07:41:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.395 ************************************ 00:14:30.395 START TEST nvmf_invalid 00:14:30.395 ************************************ 00:14:30.395 07:41:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:30.395 * Looking for test storage... 
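The nvmf_invalid suite starting here (target/invalid.sh) drives the same nvmf RPCs with deliberately malformed arguments and asserts on the JSON-RPC error payloads rather than on success. Its first case appears at the tail of this excerpt: creating a subsystem against a target name that does not exist. Roughly, assuming invalid.sh captures rpc.py's combined output and string-matches it the way the [[ ... == ... ]] trace suggests (the glob match below is a simplification of that full-string compare; $rpc is the rpc.py path set in invalid.sh@12 below):

    # An unknown tgt_name must fail with JSON-RPC error -32603.
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode182 2>&1) || true
    [[ $out == *"Unable to find target foobar"* ]]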
00:14:30.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.396 07:41:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:32.307 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.307 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:32.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:32.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:32.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.308 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:32.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:14:32.566 00:14:32.566 --- 10.0.0.2 ping statistics --- 00:14:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.566 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:14:32.566 00:14:32.566 --- 10.0.0.1 ping statistics --- 00:14:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.566 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1019577 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1019577 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1019577 ']' 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.566 07:41:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.566 [2024-07-15 07:41:23.711838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
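One detail worth pulling out of the setup trace above: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, so the host and target sides of the test ride separate network stacks over the two e810 ports, and then blocks until the app answers on its RPC socket. Per the trace (waitforlisten's exact polling mechanics are assumed):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # waits for /var/tmp/spdk.sock to accept RPCs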
00:14:32.566 [2024-07-15 07:41:23.711994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.566 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.826 [2024-07-15 07:41:23.855544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.085 [2024-07-15 07:41:24.124501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.085 [2024-07-15 07:41:24.124576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.085 [2024-07-15 07:41:24.124605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.085 [2024-07-15 07:41:24.124627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.085 [2024-07-15 07:41:24.124662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.085 [2024-07-15 07:41:24.124790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.085 [2024-07-15 07:41:24.124847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.085 [2024-07-15 07:41:24.124911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.085 [2024-07-15 07:41:24.124921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:33.653 07:41:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode182 00:14:33.653 [2024-07-15 07:41:24.864775] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:33.913 07:41:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:33.913 { 00:14:33.913 "nqn": "nqn.2016-06.io.spdk:cnode182", 00:14:33.913 "tgt_name": "foobar", 00:14:33.913 "method": "nvmf_create_subsystem", 00:14:33.913 "req_id": 1 00:14:33.913 } 00:14:33.913 Got JSON-RPC error response 00:14:33.913 response: 00:14:33.913 { 00:14:33.913 "code": -32603, 00:14:33.913 "message": "Unable to find target foobar" 00:14:33.913 }' 00:14:33.913 07:41:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:33.913 { 00:14:33.913 "nqn": "nqn.2016-06.io.spdk:cnode182", 00:14:33.913 "tgt_name": "foobar", 00:14:33.913 "method": "nvmf_create_subsystem", 00:14:33.913 "req_id": 1 00:14:33.913 } 00:14:33.913 Got JSON-RPC error response 00:14:33.913 response: 00:14:33.913 { 00:14:33.913 "code": -32603, 00:14:33.913 "message": "Unable to find target foobar" 00:14:33.913 } == 
*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:33.913 07:41:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:33.913 07:41:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19116 00:14:33.913 [2024-07-15 07:41:25.129745] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19116: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:34.172 { 00:14:34.172 "nqn": "nqn.2016-06.io.spdk:cnode19116", 00:14:34.172 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:34.172 "method": "nvmf_create_subsystem", 00:14:34.172 "req_id": 1 00:14:34.172 } 00:14:34.172 Got JSON-RPC error response 00:14:34.172 response: 00:14:34.172 { 00:14:34.172 "code": -32602, 00:14:34.172 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:34.172 }' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:34.172 { 00:14:34.172 "nqn": "nqn.2016-06.io.spdk:cnode19116", 00:14:34.172 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:34.172 "method": "nvmf_create_subsystem", 00:14:34.172 "req_id": 1 00:14:34.172 } 00:14:34.172 Got JSON-RPC error response 00:14:34.172 response: 00:14:34.172 { 00:14:34.172 "code": -32602, 00:14:34.172 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:34.172 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27766 00:14:34.172 [2024-07-15 07:41:25.370486] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27766: invalid model number 'SPDK_Controller' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:34.172 { 00:14:34.172 "nqn": "nqn.2016-06.io.spdk:cnode27766", 00:14:34.172 "model_number": "SPDK_Controller\u001f", 00:14:34.172 "method": "nvmf_create_subsystem", 00:14:34.172 "req_id": 1 00:14:34.172 } 00:14:34.172 Got JSON-RPC error response 00:14:34.172 response: 00:14:34.172 { 00:14:34.172 "code": -32602, 00:14:34.172 "message": "Invalid MN SPDK_Controller\u001f" 00:14:34.172 }' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:34.172 { 00:14:34.172 "nqn": "nqn.2016-06.io.spdk:cnode27766", 00:14:34.172 "model_number": "SPDK_Controller\u001f", 00:14:34.172 "method": "nvmf_create_subsystem", 00:14:34.172 "req_id": 1 00:14:34.172 } 00:14:34.172 Got JSON-RPC error response 00:14:34.172 response: 00:14:34.172 { 00:14:34.172 "code": -32602, 00:14:34.172 "message": "Invalid MN SPDK_Controller\u001f" 00:14:34.172 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.172 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:34.431 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '|5bt5-~`81U{w0s-9G]NP' 00:14:34.432 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '|5bt5-~`81U{w0s-9G]NP' nqn.2016-06.io.spdk:cnode9086 00:14:34.691 [2024-07-15 07:41:25.699634] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9086: invalid serial number '|5bt5-~`81U{w0s-9G]NP' 00:14:34.691 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:34.692 { 00:14:34.692 "nqn": "nqn.2016-06.io.spdk:cnode9086", 00:14:34.692 "serial_number": "|5bt5-~`81U{w0s-9G]NP", 00:14:34.692 "method": "nvmf_create_subsystem", 00:14:34.692 "req_id": 1 00:14:34.692 } 00:14:34.692 Got JSON-RPC error response 00:14:34.692 response: 00:14:34.692 { 00:14:34.692 
"code": -32602, 00:14:34.692 "message": "Invalid SN |5bt5-~`81U{w0s-9G]NP" 00:14:34.692 }' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:34.692 { 00:14:34.692 "nqn": "nqn.2016-06.io.spdk:cnode9086", 00:14:34.692 "serial_number": "|5bt5-~`81U{w0s-9G]NP", 00:14:34.692 "method": "nvmf_create_subsystem", 00:14:34.692 "req_id": 1 00:14:34.692 } 00:14:34.692 Got JSON-RPC error response 00:14:34.692 response: 00:14:34.692 { 00:14:34.692 "code": -32602, 00:14:34.692 "message": "Invalid SN |5bt5-~`81U{w0s-9G]NP" 00:14:34.692 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:34.692 07:41:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:34.692 07:41:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:34.692 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:34.693 07:41:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
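The wall of printf/echo traces on both sides of this point is a single helper at work: gen_random_s in test/nvmf/target/invalid.sh appends one random printable character per iteration so the test can hand deliberately hostile serial and model numbers to nvmf_create_subsystem. A condensed sketch of the same loop (the real helper draws codes 32-127 and guards against a leading '-' slightly differently; this version is simplified):

# Build an N-character string of random printable ASCII, roughly what
# target/invalid.sh's gen_random_s is doing in the trace above and below.
gen_random_s() {
    local length=$1 ll code ch string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 95 + 32 ))                 # printable ASCII 32..126
        printf -v ch "\\x$(printf '%x' "$code")"     # code point -> character
        string+=$ch
    done
    [[ ${string:0:1} == - ]] && string=${string/#-/_}  # a leading '-' would parse as an option
    echo "$string"
}
gen_random_s 21   # e.g. '|5bt5-~`81U{w0s-9G]NP' in this run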
00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:34.693 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ITRej2plbXVRX+bX*0r!zjaJch"' 00:14:34.694 07:41:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d 'ITRej2plbXVRX+bX*0r!zjaJch"' nqn.2016-06.io.spdk:cnode23274 00:14:34.952 [2024-07-15 07:41:26.052815] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23274: invalid model number 'ITRej2plbXVRX+bX*0r!zjaJch"' 00:14:34.952 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:34.952 { 00:14:34.952 "nqn": "nqn.2016-06.io.spdk:cnode23274", 00:14:34.952 "model_number": "ITRej2plbXVRX+bX*0r!zjaJch\"", 00:14:34.952 "method": "nvmf_create_subsystem", 00:14:34.952 "req_id": 1 00:14:34.952 } 00:14:34.952 Got JSON-RPC error response 00:14:34.952 response: 00:14:34.952 { 00:14:34.952 "code": -32602, 00:14:34.952 "message": "Invalid MN ITRej2plbXVRX+bX*0r!zjaJch\"" 00:14:34.952 }' 00:14:34.952 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:34.952 { 00:14:34.952 "nqn": "nqn.2016-06.io.spdk:cnode23274", 00:14:34.952 "model_number": "ITRej2plbXVRX+bX*0r!zjaJch\"", 00:14:34.952 "method": "nvmf_create_subsystem", 00:14:34.952 "req_id": 1 00:14:34.952 } 00:14:34.952 Got JSON-RPC error response 00:14:34.952 response: 00:14:34.952 { 00:14:34.952 "code": -32602, 00:14:34.952 "message": "Invalid MN ITRej2plbXVRX+bX*0r!zjaJch\"" 00:14:34.952 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:34.952 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:35.209 [2024-07-15 07:41:26.309739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.210 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:35.467 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:35.467 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:35.467 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:35.467 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:35.467 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:35.724 [2024-07-15 07:41:26.808999] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:35.724 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:35.724 { 00:14:35.724 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:35.724 "listen_address": { 00:14:35.724 "trtype": "tcp", 00:14:35.724 "traddr": "", 00:14:35.724 "trsvcid": "4421" 00:14:35.724 }, 00:14:35.724 "method": "nvmf_subsystem_remove_listener", 00:14:35.724 "req_id": 1 00:14:35.724 } 00:14:35.724 Got JSON-RPC error response 00:14:35.724 response: 00:14:35.724 { 00:14:35.724 "code": -32602, 00:14:35.724 "message": "Invalid parameters" 00:14:35.724 }' 00:14:35.724 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:35.724 { 00:14:35.724 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:35.724 "listen_address": { 00:14:35.724 "trtype": "tcp", 00:14:35.724 "traddr": "", 00:14:35.724 "trsvcid": "4421" 00:14:35.724 }, 00:14:35.724 "method": "nvmf_subsystem_remove_listener", 00:14:35.724 "req_id": 1 00:14:35.724 } 00:14:35.724 Got JSON-RPC error response 00:14:35.724 response: 00:14:35.724 { 00:14:35.724 "code": -32602, 00:14:35.724 "message": "Invalid parameters" 00:14:35.724 } != 
*\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:35.724 07:41:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29281 -i 0 00:14:35.986 [2024-07-15 07:41:27.049771] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29281: invalid cntlid range [0-65519] 00:14:35.986 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:35.986 { 00:14:35.986 "nqn": "nqn.2016-06.io.spdk:cnode29281", 00:14:35.986 "min_cntlid": 0, 00:14:35.986 "method": "nvmf_create_subsystem", 00:14:35.986 "req_id": 1 00:14:35.986 } 00:14:35.986 Got JSON-RPC error response 00:14:35.986 response: 00:14:35.986 { 00:14:35.986 "code": -32602, 00:14:35.986 "message": "Invalid cntlid range [0-65519]" 00:14:35.986 }' 00:14:35.986 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:35.986 { 00:14:35.986 "nqn": "nqn.2016-06.io.spdk:cnode29281", 00:14:35.986 "min_cntlid": 0, 00:14:35.986 "method": "nvmf_create_subsystem", 00:14:35.986 "req_id": 1 00:14:35.986 } 00:14:35.986 Got JSON-RPC error response 00:14:35.986 response: 00:14:35.986 { 00:14:35.986 "code": -32602, 00:14:35.986 "message": "Invalid cntlid range [0-65519]" 00:14:35.986 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:35.986 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24678 -i 65520 00:14:36.244 [2024-07-15 07:41:27.306672] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24678: invalid cntlid range [65520-65519] 00:14:36.244 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:36.244 { 00:14:36.244 "nqn": "nqn.2016-06.io.spdk:cnode24678", 00:14:36.244 "min_cntlid": 65520, 00:14:36.244 "method": "nvmf_create_subsystem", 00:14:36.244 "req_id": 1 00:14:36.244 } 00:14:36.244 Got JSON-RPC error response 00:14:36.244 response: 00:14:36.244 { 00:14:36.244 "code": -32602, 00:14:36.244 "message": "Invalid cntlid range [65520-65519]" 00:14:36.244 }' 00:14:36.244 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:36.244 { 00:14:36.244 "nqn": "nqn.2016-06.io.spdk:cnode24678", 00:14:36.244 "min_cntlid": 65520, 00:14:36.244 "method": "nvmf_create_subsystem", 00:14:36.244 "req_id": 1 00:14:36.244 } 00:14:36.244 Got JSON-RPC error response 00:14:36.244 response: 00:14:36.244 { 00:14:36.244 "code": -32602, 00:14:36.244 "message": "Invalid cntlid range [65520-65519]" 00:14:36.244 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:36.244 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24278 -I 0 00:14:36.501 [2024-07-15 07:41:27.547511] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24278: invalid cntlid range [1-0] 00:14:36.502 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:36.502 { 00:14:36.502 "nqn": "nqn.2016-06.io.spdk:cnode24278", 00:14:36.502 "max_cntlid": 0, 00:14:36.502 "method": "nvmf_create_subsystem", 00:14:36.502 "req_id": 1 00:14:36.502 } 00:14:36.502 Got JSON-RPC error response 00:14:36.502 response: 00:14:36.502 { 00:14:36.502 "code": -32602, 00:14:36.502 "message": "Invalid cntlid range [1-0]" 00:14:36.502 }' 00:14:36.502 07:41:27 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:36.502 { 00:14:36.502 "nqn": "nqn.2016-06.io.spdk:cnode24278", 00:14:36.502 "max_cntlid": 0, 00:14:36.502 "method": "nvmf_create_subsystem", 00:14:36.502 "req_id": 1 00:14:36.502 } 00:14:36.502 Got JSON-RPC error response 00:14:36.502 response: 00:14:36.502 { 00:14:36.502 "code": -32602, 00:14:36.502 "message": "Invalid cntlid range [1-0]" 00:14:36.502 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:36.502 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode954 -I 65520 00:14:36.760 [2024-07-15 07:41:27.800464] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode954: invalid cntlid range [1-65520] 00:14:36.760 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:36.760 { 00:14:36.760 "nqn": "nqn.2016-06.io.spdk:cnode954", 00:14:36.760 "max_cntlid": 65520, 00:14:36.760 "method": "nvmf_create_subsystem", 00:14:36.760 "req_id": 1 00:14:36.760 } 00:14:36.760 Got JSON-RPC error response 00:14:36.760 response: 00:14:36.760 { 00:14:36.760 "code": -32602, 00:14:36.760 "message": "Invalid cntlid range [1-65520]" 00:14:36.760 }' 00:14:36.760 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:36.760 { 00:14:36.760 "nqn": "nqn.2016-06.io.spdk:cnode954", 00:14:36.760 "max_cntlid": 65520, 00:14:36.760 "method": "nvmf_create_subsystem", 00:14:36.760 "req_id": 1 00:14:36.760 } 00:14:36.760 Got JSON-RPC error response 00:14:36.760 response: 00:14:36.760 { 00:14:36.760 "code": -32602, 00:14:36.760 "message": "Invalid cntlid range [1-65520]" 00:14:36.760 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:36.760 07:41:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25763 -i 6 -I 5 00:14:37.019 [2024-07-15 07:41:28.049321] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25763: invalid cntlid range [6-5] 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:37.019 { 00:14:37.019 "nqn": "nqn.2016-06.io.spdk:cnode25763", 00:14:37.019 "min_cntlid": 6, 00:14:37.019 "max_cntlid": 5, 00:14:37.019 "method": "nvmf_create_subsystem", 00:14:37.019 "req_id": 1 00:14:37.019 } 00:14:37.019 Got JSON-RPC error response 00:14:37.019 response: 00:14:37.019 { 00:14:37.019 "code": -32602, 00:14:37.019 "message": "Invalid cntlid range [6-5]" 00:14:37.019 }' 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:37.019 { 00:14:37.019 "nqn": "nqn.2016-06.io.spdk:cnode25763", 00:14:37.019 "min_cntlid": 6, 00:14:37.019 "max_cntlid": 5, 00:14:37.019 "method": "nvmf_create_subsystem", 00:14:37.019 "req_id": 1 00:14:37.019 } 00:14:37.019 Got JSON-RPC error response 00:14:37.019 response: 00:14:37.019 { 00:14:37.019 "code": -32602, 00:14:37.019 "message": "Invalid cntlid range [6-5]" 00:14:37.019 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:37.019 { 00:14:37.019 "name": "foobar", 00:14:37.019 "method": "nvmf_delete_target", 00:14:37.019 
"req_id": 1 00:14:37.019 } 00:14:37.019 Got JSON-RPC error response 00:14:37.019 response: 00:14:37.019 { 00:14:37.019 "code": -32602, 00:14:37.019 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:37.019 }' 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:37.019 { 00:14:37.019 "name": "foobar", 00:14:37.019 "method": "nvmf_delete_target", 00:14:37.019 "req_id": 1 00:14:37.019 } 00:14:37.019 Got JSON-RPC error response 00:14:37.019 response: 00:14:37.019 { 00:14:37.019 "code": -32602, 00:14:37.019 "message": "The specified target doesn't exist, cannot delete it." 00:14:37.019 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.019 rmmod nvme_tcp 00:14:37.019 rmmod nvme_fabrics 00:14:37.019 rmmod nvme_keyring 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1019577 ']' 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1019577 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1019577 ']' 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1019577 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.019 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019577 00:14:37.279 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:37.279 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:37.279 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019577' 00:14:37.279 killing process with pid 1019577 00:14:37.279 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1019577 00:14:37.279 07:41:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1019577 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.659 
07:41:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.659 07:41:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.569 07:41:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:40.569 00:14:40.569 real 0m10.174s 00:14:40.569 user 0m24.217s 00:14:40.569 sys 0m2.538s 00:14:40.569 07:41:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.569 07:41:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:40.569 ************************************ 00:14:40.569 END TEST nvmf_invalid 00:14:40.569 ************************************ 00:14:40.569 07:41:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:40.569 07:41:31 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:40.569 07:41:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:40.569 07:41:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.569 07:41:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:40.569 ************************************ 00:14:40.569 START TEST nvmf_abort 00:14:40.569 ************************************ 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:40.569 * Looking for test storage... 00:14:40.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.569 07:41:31 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.569 07:41:31 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:40.570 07:41:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:42.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:42.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.473 07:41:33 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:42.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:42.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.473 07:41:33 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:14:42.473 00:14:42.473 --- 10.0.0.2 ping statistics --- 00:14:42.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.473 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:14:42.473 00:14:42.473 --- 10.0.0.1 ping statistics --- 00:14:42.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.473 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1022344 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1022344 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1022344 ']' 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.473 07:41:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:42.732 [2024-07-15 07:41:33.771934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
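For reference, the nvmf_tcp_init sequence traced above reduces to straightforward namespace plumbing: one E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target side at 10.0.0.2, its back-to-back peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and two pings confirm reachability in both directions. A hand-runnable condensation, assuming the same interface names and cabling as this rig:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator-side interface
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator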
00:14:42.732 [2024-07-15 07:41:33.772089] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.732 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.732 [2024-07-15 07:41:33.916365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.992 [2024-07-15 07:41:34.182136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.992 [2024-07-15 07:41:34.182227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.992 [2024-07-15 07:41:34.182263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.992 [2024-07-15 07:41:34.182297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.992 [2024-07-15 07:41:34.182319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.992 [2024-07-15 07:41:34.182465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.992 [2024-07-15 07:41:34.182512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.992 [2024-07-15 07:41:34.182522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.557 [2024-07-15 07:41:34.730068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.557 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.815 Malloc0 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.815 Delay0 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
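The target configuration for the abort test is pure JSON-RPC; the rpc_cmd calls traced above and just below collapse to the sequence sketched here (paths shortened to the spdk checkout; the flag glosses are a best reading of rpc.py's options and should be treated as annotation, not documentation):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
        # -u: I/O unit size 8192; -a: admin queue depth 256; -o is a TCP
        #     transport option the harness injects via NVMF_TRANSPORT_OPTS
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
        # 64 MiB RAM-backed bdev with 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
        # wrap it in a delay bdev: average and p99 read/write latencies,
        # all set to 1,000,000 us (a full second per I/O)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
        # -a: allow any host; -s: serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Delay0 exists purely to guarantee a deep backlog of one-second reads for the abort tool to cancel.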
00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.815 [2024-07-15 07:41:34.855873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:43.815 07:41:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.816 07:41:34 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:43.816 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.816 [2024-07-15 07:41:35.003069] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:46.353 Initializing NVMe Controllers 00:14:46.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:46.353 controller IO queue size 128 less than required 00:14:46.353 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:46.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:46.353 Initialization complete. Launching workers. 
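The abort example launched here connects at queue depth 128, the same depth as the controller's I/O queue (hence the "controller IO queue size 128 less than required" notice), and every read spends roughly a second inside Delay0, so the submission queue saturates immediately and the tool spends its run aborting queued commands; the NS/CTRLR counters printed next tally exactly that. The invocation, should it need re-running by hand (flag glosses are a best reading of the example's options, not verified against its help text):

    # -r: transport ID of the listener configured above; -c 0x1: one core;
    # -t 1: run time in seconds; -l warning: log level; -q 128: queue depth
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128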
00:14:46.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25336 00:14:46.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25393, failed to submit 66 00:14:46.353 success 25336, unsuccess 57, failed 0 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.353 rmmod nvme_tcp 00:14:46.353 rmmod nvme_fabrics 00:14:46.353 rmmod nvme_keyring 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1022344 ']' 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1022344 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1022344 ']' 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1022344 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1022344 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1022344' 00:14:46.353 killing process with pid 1022344 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1022344 00:14:46.353 07:41:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1022344 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.769 07:41:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.679 07:41:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.679 00:14:49.679 real 0m9.040s 00:14:49.679 user 0m14.758s 00:14:49.679 sys 0m2.639s 00:14:49.679 07:41:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.679 07:41:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:49.679 ************************************ 00:14:49.679 END TEST nvmf_abort 00:14:49.679 ************************************ 00:14:49.679 07:41:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:49.679 07:41:40 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:49.679 07:41:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.679 07:41:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.679 07:41:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.679 ************************************ 00:14:49.679 START TEST nvmf_ns_hotplug_stress 00:14:49.679 ************************************ 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:49.679 * Looking for test storage... 00:14:49.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.679 07:41:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.679 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.680 07:41:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.680 07:41:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:51.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:51.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.584 07:41:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:51.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:51.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.584 07:41:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.584 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:14:51.844 00:14:51.844 --- 10.0.0.2 ping statistics --- 00:14:51.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.844 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:14:51.844 00:14:51.844 --- 10.0.0.1 ping statistics --- 00:14:51.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.844 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1024828 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1024828 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1024828 ']' 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.844 07:41:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.844 [2024-07-15 07:41:42.973458] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
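As in the abort test, the target is started inside the namespace and the harness blocks until its RPC socket answers. The launch line above decodes as: -m 0xE is the core mask (binary 1110, cores 1 through 3, matching the three "Reactor started on core" notices that follow), -e 0xFFFF enables every tracepoint group (the "Tracepoint Group Mask 0xFFFF" notice), and -i 0 selects the shared-memory instance id that the process_shm trap uses later. A minimal stand-in for the launch plus waitforlisten; the polling loop approximates what that helper does and is not its actual code:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done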
00:14:51.844 [2024-07-15 07:41:42.973611] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.844 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.104 [2024-07-15 07:41:43.128954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.363 [2024-07-15 07:41:43.390763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.363 [2024-07-15 07:41:43.390849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.363 [2024-07-15 07:41:43.390909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.363 [2024-07-15 07:41:43.390944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.363 [2024-07-15 07:41:43.390982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.363 [2024-07-15 07:41:43.391141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.363 [2024-07-15 07:41:43.391211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.363 [2024-07-15 07:41:43.391213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:52.928 07:41:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:53.186 [2024-07-15 07:41:44.170388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.186 07:41:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:53.444 07:41:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.702 [2024-07-15 07:41:44.745358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.702 07:41:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.959 07:41:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:14:54.216 Malloc0 00:14:54.216 07:41:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:54.474 Delay0 00:14:54.474 07:41:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.733 07:41:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:54.991 NULL1 00:14:54.991 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:55.249 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1025256 00:14:55.249 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:55.249 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:14:55.249 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.249 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.506 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.763 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:55.763 07:41:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:56.020 true 00:14:56.020 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:14:56.020 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.277 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.534 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:56.534 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:56.794 true 00:14:56.794 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:14:56.794 07:41:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.727 Read completed with error (sct=0, sc=11) 00:14:57.727 07:41:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.986 07:41:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:57.986 07:41:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:57.986 true 00:14:57.986 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:14:57.986 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.244 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.502 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:58.502 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:58.759 true 00:14:58.759 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:14:58.759 07:41:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.697 07:41:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:59.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:59.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:59.955 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:59.955 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:00.213 true 00:15:00.213 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:00.213 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.472 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.729 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:00.729 07:41:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:00.987 true 00:15:00.987 07:41:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:00.987 07:41:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
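The cycle repeating above is the entire stress loop: spdk_nvme_perf reads at queue depth 128 for 30 seconds while, on each pass, the script rips namespace 1 out of cnode1, re-adds Delay0, and resizes NULL1 one MiB larger (null_size=1003, 1004, ...; bdev_null_create sizes are in MiB). Reads caught in flight complete with sct=0, sc=11 (0x0b, Invalid Namespace or Format), which -Q 1000 tolerates and rate-limits, hence the "Message suppressed 999 times" lines. Reconstructed from the traced ns_hotplug_stress.sh lines, with paths shortened:

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    # hot-remove/re-add the namespace and grow the null bdev while perf runs
    while kill -0 $PERF_PID 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        ./scripts/rpc.py bdev_null_resize NULL1 $null_size
    done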
00:15:01.923 07:41:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:02.180 07:41:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:02.180 07:41:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:02.438 true 00:15:02.438 07:41:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:02.438 07:41:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.695 07:41:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.960 07:41:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:02.960 07:41:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:03.279 true 00:15:03.279 07:41:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:03.279 07:41:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.217 07:41:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.475 07:41:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:04.475 07:41:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:04.475 true 00:15:04.733 07:41:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:04.733 07:41:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.733 07:41:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.991 07:41:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:04.991 07:41:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:05.249 true 00:15:05.249 07:41:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:05.249 07:41:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.185 07:41:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.443 07:41:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:06.443 07:41:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:06.700 true 00:15:06.700 07:41:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:06.700 07:41:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.958 07:41:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.216 07:41:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:07.216 07:41:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:07.476 true 00:15:07.476 07:41:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:07.476 07:41:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.413 07:41:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.413 07:41:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:08.413 07:41:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:08.670 true 00:15:08.670 07:41:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:08.670 07:41:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.928 07:42:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.185 07:42:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:09.185 07:42:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:09.442 true 00:15:09.442 07:42:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:09.442 07:42:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.375 07:42:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.632 07:42:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:10.632 07:42:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:10.890 true 00:15:10.890 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:10.890 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.147 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.404 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:11.404 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:11.662 true 00:15:11.662 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:11.662 07:42:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.918 07:42:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.175 07:42:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:12.175 07:42:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:12.434 true 00:15:12.434 07:42:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:12.434 07:42:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.809 07:42:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:13.809 07:42:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:13.809 07:42:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:14.067 true 00:15:14.067 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:14.067 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.325 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.583 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:14.583 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:14.841 true 00:15:14.841 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:14.841 07:42:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.099 07:42:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.357 07:42:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:15.357 07:42:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:15.616 true 00:15:15.616 07:42:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:15.616 07:42:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.639 07:42:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:16.897 07:42:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:16.897 07:42:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:17.156 true 00:15:17.156 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:17.156 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.414 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.673 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:17.673 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:17.931 true 00:15:17.931 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:17.931 07:42:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.896 07:42:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.896 07:42:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:18.896 07:42:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:19.154 true 00:15:19.154 07:42:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:19.154 07:42:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.412 07:42:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.670 07:42:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:19.670 07:42:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:19.929 true 00:15:19.929 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:19.929 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.187 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.444 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:20.444 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:20.702 true 00:15:20.702 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:20.702 07:42:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.638 07:42:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.897 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:21.897 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:22.155 true 00:15:22.155 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:22.155 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.413 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.671 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:15:22.671 07:42:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:22.929 true 00:15:22.929 07:42:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:22.929 07:42:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.868 07:42:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.126 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:24.126 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:24.385 true 00:15:24.385 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:24.385 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.643 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.901 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:24.901 07:42:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:25.159 true 00:15:25.159 07:42:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256 00:15:25.159 07:42:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.095 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.095 Initializing NVMe Controllers 00:15:26.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.095 Controller IO queue size 128, less than required. 00:15:26.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.095 Controller IO queue size 128, less than required. 00:15:26.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:26.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:26.095 Initialization complete. Launching workers. 
00:15:26.095 ========================================================
00:15:26.095                                                                              Latency(us)
00:15:26.095 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:15:26.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     398.26       0.19  153473.65    3959.65 1016877.75
00:15:26.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7727.06       3.77   16513.80    2058.30  490108.49
00:15:26.095 ========================================================
00:15:26.095 Total                                                                    :    8125.32       3.97   23226.92    2058.30 1016877.75
00:15:26.095
00:15:26.095 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:15:26.095 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:15:26.353 true
00:15:26.353 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1025256
00:15:26.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1025256) - No such process
00:15:26.353 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1025256
00:15:26.353 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:26.612 07:42:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:26.870 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:15:26.870 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:15:26.870 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:15:26.870 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:26.870 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:15:27.127 null0
00:15:27.127 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:27.127 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:27.127 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:15:27.384 null1
00:15:27.384 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:27.384 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:27.384 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:15:27.642 null2
00:15:27.642 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:27.642 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:27.642 07:42:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:15:27.898 null3
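In the spdk_nvme_perf summary above, the Total row is the IOPS-weighted combination of the two namespaces: 398.26 + 7727.06 = 8125.32 IOPS, and (398.26 x 153473.65 + 7727.06 x 16513.80) / 8125.32 ≈ 23226.92 us average latency. NSID 1 is the Delay0-backed namespace (the delay bdev was created with 1000000 us latency parameters, which lines up with its ~1.02 s max), while NSID 2 is the plain null bdev. The perf process (1025256) has exited by the time the sh@44 kill -0 probe runs, so the resize loop ends and the script removes both namespaces before starting the multi-worker phase below.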
00:15:27.898 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.898 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.898 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:28.154 null4 00:15:28.154 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:28.154 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:28.154 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:28.411 null5 00:15:28.411 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:28.411 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:28.411 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:28.669 null6 00:15:28.669 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:28.669 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:28.669 07:42:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:28.927 null7 00:15:28.927 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:28.927 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:28.927 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:28.927 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.927 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.927 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
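The sh@58-64 entries open the second phase: eight add_remove workers, each paired with its own namespace ID (1-8) and null bdev (null0-null7), launched in the background and reaped by the sh@66 wait visible further down. A minimal bash sketch of that fan-out, reconstructed from the trace; the exact loop syntax is an assumption, and rpc_py is the same assumed shorthand as before:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # sh@59-60: create null0..null7 (size 100, block size 4096, as traced)
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62-64: launch the workers in the background
        add_remove $((i + 1)) "null$i" &             # sh@63: nsid 1..8 paired with null0..null7
        pids+=($!)                                   # sh@64: collect worker PIDs
    done
    wait "${pids[@]}"                                # sh@66: join all eight workers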
00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
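Each worker is the add_remove shell function, whose body the interleaved sh@14-18 entries trace: ten add/remove cycles of one fixed namespace ID against cnode1. A sketch reconstructed from those entries, under the same rpc_py assumption:

    add_remove() {
        local nsid=$1 bdev=$2                        # sh@14: e.g. "add_remove 1 null0"
        for ((i = 0; i < 10; i++)); do               # sh@16: ten hot-plug cycles per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }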
00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
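From this point to the end of the phase, the xtrace output of the eight background workers is interleaved nondeterministically: consecutive sh@16-18 entries can come from different workers, which is why the loop counters and pids+=($!) lines appear out of order. The namespace ID in each nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns pair (1-8) identifies the worker that emitted it.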
00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1030033 1030034 1030035 1030038 1030040 1030042 1030044 1030046 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.928 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.185 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:29.185 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:29.185 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.185 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:29.185 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.186 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:29.186 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:29.186 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.471 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:29.727 07:42:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.983 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:30.240 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.496 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.753 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:31.011 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.011 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:31.011 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:31.011 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.011 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:31.011 07:42:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.269 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.269 
07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:31.527 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.785 07:42:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.043 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.300 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:32.558 
07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.558 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:32.815 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.816 07:42:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:33.074 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.332 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:33.590 
07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.590 07:42:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.860 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:34.118 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.376 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.376 rmmod nvme_tcp 00:15:34.636 rmmod nvme_fabrics 00:15:34.636 rmmod nvme_keyring 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1024828 ']' 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1024828 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1024828 ']' 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1024828 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024828 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024828' 00:15:34.636 killing process with pid 1024828 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1024828 00:15:34.636 07:42:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1024828 00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:36.019 07:42:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:37.927 07:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:37.927
00:15:37.927 real 0m48.361s
00:15:37.927 user 3m36.424s
00:15:37.927 sys 0m16.193s
00:15:37.927 07:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:37.927 07:42:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:15:37.927 ************************************
00:15:37.927 END TEST nvmf_ns_hotplug_stress
00:15:37.927 ************************************
00:15:37.927 07:42:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:15:37.927 07:42:29 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:15:37.927 07:42:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:15:37.927 07:42:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:37.927 07:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:37.927 ************************************
00:15:37.927 START TEST nvmf_connect_stress
00:15:37.927 ************************************
00:15:37.927 07:42:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:15:38.187 * Looking for test storage...
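
For reference, the add/remove churn traced above reduces to a small loop. The sketch below assumes the structure implied by the ns_hotplug_stress.sh@16-18 markers (a C-style for loop tracing as (( ++i )) / (( i < 10 )) on line 16, nvmf_subsystem_add_ns on line 17, nvmf_subsystem_remove_ns on line 18); the add_remove helper name and the explicit backgrounding are illustrative, not lifted from the script:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {  # hypothetical helper: one hotplug worker per namespace
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do  # traces as sh@16: (( ++i )) / (( i < 10 ))
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
      done
  }

  # Eight workers (null0..null7 -> nsid 1..8, matching the trace) run
  # concurrently, which is why the add/remove calls above interleave in a
  # different order on every pass.
  for n in {0..7}; do
      add_remove "$((n + 1))" "null$n" &
  done
  wait
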
00:15:38.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.187 07:42:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.188 07:42:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.092 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.092 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:40.093 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:40.093 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:40.093 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.093 07:42:31 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:40.093 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:40.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms
00:15:40.093
00:15:40.093 --- 10.0.0.2 ping statistics ---
00:15:40.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:40.093 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:40.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:40.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms
00:15:40.093
00:15:40.093 --- 10.0.0.1 ping statistics ---
00:15:40.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:40.093 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1032919
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1032919
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1032919 ']'
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:40.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:40.093 07:42:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:40.353 [2024-07-15 07:42:31.354417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
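
Pulled together, the nvmf/common.sh trace above builds a back-to-back TCP test topology out of the two e810 ports discovered earlier: cvl_0_0 is moved into a private network namespace and serves as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The consolidated sequence, with commands taken verbatim from the @244-@268 trace entries:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Both single-packet pings succeeding (0.261 ms and 0.241 ms above) is what lets nvmfappstart proceed to launch nvmf_tgt inside the namespace.
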
00:15:40.353 [2024-07-15 07:42:31.354559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.353 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.353 [2024-07-15 07:42:31.488374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.614 [2024-07-15 07:42:31.717045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.614 [2024-07-15 07:42:31.717113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.614 [2024-07-15 07:42:31.717142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.614 [2024-07-15 07:42:31.717175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.614 [2024-07-15 07:42:31.717193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.614 [2024-07-15 07:42:31.717316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.614 [2024-07-15 07:42:31.717351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.614 [2024-07-15 07:42:31.717362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.182 [2024-07-15 07:42:32.311473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.182 [2024-07-15 07:42:32.346496] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:41.182 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.183 NULL1 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1033075 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
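
Before the twenty worker stanzas are generated by the seq 1 20 / cat loop, the target has been configured through four rpc_cmd calls, traced at connect_stress.sh@15-18 above. In these tests rpc_cmd effectively forwards to scripts/rpc.py over the app's RPC socket, so the setup amounts to the following sketch (the rpc_py variable and the $! capture are illustrative; all flags and values are copied from the trace):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
  "$rpc_py" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  "$rpc_py" bdev_null_create NULL1 1000 512

  # connect_stress.sh@20-21: launch the stressor against the listener
  # (10 seconds per the -t 10 flag, pinned to core mask 0x1) and record
  # its PID, traced above as PERF_PID=1033075.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" -t 10 &
  PERF_PID=$!

The repeated kill -0 1033075 checks that follow (connect_stress.sh@34) confirm the stressor is still alive between the rpc_cmd batches it is being run against.
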
00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.183 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.443 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.703 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.703 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:41.703 07:42:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.703 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.703 07:42:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.962 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.962 07:42:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:41.962 07:42:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.962 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.962 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.220 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.220 07:42:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 
00:15:42.220 07:42:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.220 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.220 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.478 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.478 07:42:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:42.478 07:42:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.478 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.478 07:42:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.046 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.046 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:43.046 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.046 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.046 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.306 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.306 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:43.306 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.306 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.306 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.593 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.593 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:43.593 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.593 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.593 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.851 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.851 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:43.851 07:42:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.851 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.851 07:42:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.110 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.111 07:42:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:44.111 07:42:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.111 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.111 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.679 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.679 07:42:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:44.679 07:42:35 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.679 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.679 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.938 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.938 07:42:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:44.938 07:42:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.938 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.938 07:42:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.196 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.196 07:42:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:45.196 07:42:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.196 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.196 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.454 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.454 07:42:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:45.454 07:42:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.454 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.454 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.713 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.713 07:42:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:45.713 07:42:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.713 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.713 07:42:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.283 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.283 07:42:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:46.283 07:42:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.283 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.283 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.541 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.541 07:42:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:46.541 07:42:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.541 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.541 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.801 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.801 07:42:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:46.801 07:42:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.801 
07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.801 07:42:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.060 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.060 07:42:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:47.060 07:42:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.060 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.060 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.630 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.630 07:42:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:47.630 07:42:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.630 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.630 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.888 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.888 07:42:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:47.888 07:42:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.888 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.888 07:42:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.145 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.145 07:42:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:48.145 07:42:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.145 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.145 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.403 07:42:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:48.403 07:42:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.403 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.403 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.663 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.663 07:42:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:48.663 07:42:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.663 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.663 07:42:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.231 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.231 07:42:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:49.231 07:42:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.231 07:42:40 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.231 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.494 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.494 07:42:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:49.494 07:42:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.494 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.494 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.752 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.752 07:42:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:49.752 07:42:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.752 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.752 07:42:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.011 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.011 07:42:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:50.011 07:42:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.011 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.011 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.270 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.270 07:42:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:50.270 07:42:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.270 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.270 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.838 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.838 07:42:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:50.838 07:42:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.838 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.838 07:42:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.096 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.096 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:51.096 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.096 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.096 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.356 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.356 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:51.356 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.356 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:51.356 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.356 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1033075 00:15:51.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1033075) - No such process 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1033075 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.615 rmmod nvme_tcp 00:15:51.615 rmmod nvme_fabrics 00:15:51.615 rmmod nvme_keyring 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1032919 ']' 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1032919 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1032919 ']' 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1032919 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.615 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032919 00:15:51.874 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:51.874 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:51.874 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032919' 00:15:51.874 killing process with pid 1032919 00:15:51.874 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1032919 00:15:51.874 07:42:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1032919 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.250 07:42:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.153 07:42:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:55.153 00:15:55.153 real 0m17.028s 00:15:55.153 user 0m42.523s 00:15:55.153 sys 0m5.795s 00:15:55.153 07:42:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.153 07:42:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.153 ************************************ 00:15:55.153 END TEST nvmf_connect_stress 00:15:55.153 ************************************ 00:15:55.153 07:42:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:55.153 07:42:46 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:55.153 07:42:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:55.153 07:42:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.153 07:42:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.153 ************************************ 00:15:55.153 START TEST nvmf_fused_ordering 00:15:55.153 ************************************ 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:55.153 * Looking for test storage... 
00:15:55.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:55.153 07:42:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:57.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:57.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:57.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.054 07:42:48 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.054 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:57.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:57.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:15:57.055 00:15:57.055 --- 10.0.0.2 ping statistics --- 00:15:57.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.055 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:15:57.055 00:15:57.055 --- 10.0.0.1 ping statistics --- 00:15:57.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.055 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1036350 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1036350 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1036350 ']' 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.055 07:42:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.314 [2024-07-15 07:42:48.350467] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:57.314 [2024-07-15 07:42:48.350611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.314 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.314 [2024-07-15 07:42:48.489150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.573 [2024-07-15 07:42:48.746465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.573 [2024-07-15 07:42:48.746552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.573 [2024-07-15 07:42:48.746595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.573 [2024-07-15 07:42:48.746634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.573 [2024-07-15 07:42:48.746669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.573 [2024-07-15 07:42:48.746745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 [2024-07-15 07:42:49.283141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 [2024-07-15 07:42:49.299328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.170 07:42:49 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 NULL1 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.170 07:42:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:58.170 [2024-07-15 07:42:49.370643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:58.170 [2024-07-15 07:42:49.370739] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036498 ] 00:15:58.428 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.993 Attached to nqn.2016-06.io.spdk:cnode1 00:15:58.993 Namespace ID: 1 size: 1GB 00:15:58.993 fused_ordering(0) 00:15:58.993 fused_ordering(1) 00:15:58.993 fused_ordering(2) 00:15:58.993 fused_ordering(3) 00:15:58.993 fused_ordering(4) 00:15:58.993 fused_ordering(5) 00:15:58.993 fused_ordering(6) 00:15:58.993 fused_ordering(7) 00:15:58.993 fused_ordering(8) 00:15:58.993 fused_ordering(9) 00:15:58.993 fused_ordering(10) 00:15:58.993 fused_ordering(11) 00:15:58.993 fused_ordering(12) 00:15:58.993 fused_ordering(13) 00:15:58.993 fused_ordering(14) 00:15:58.993 fused_ordering(15) 00:15:58.993 fused_ordering(16) 00:15:58.993 fused_ordering(17) 00:15:58.993 fused_ordering(18) 00:15:58.993 fused_ordering(19) 00:15:58.993 fused_ordering(20) 00:15:58.993 fused_ordering(21) 00:15:58.993 fused_ordering(22) 00:15:58.993 fused_ordering(23) 00:15:58.993 fused_ordering(24) 00:15:58.993 fused_ordering(25) 00:15:58.993 fused_ordering(26) 00:15:58.993 fused_ordering(27) 00:15:58.993 fused_ordering(28) 00:15:58.993 fused_ordering(29) 00:15:58.993 fused_ordering(30) 00:15:58.993 fused_ordering(31) 00:15:58.993 fused_ordering(32) 00:15:58.993 fused_ordering(33) 00:15:58.993 fused_ordering(34) 00:15:58.993 fused_ordering(35) 00:15:58.993 fused_ordering(36) 00:15:58.993 fused_ordering(37) 00:15:58.993 fused_ordering(38) 00:15:58.993 fused_ordering(39) 00:15:58.993 fused_ordering(40) 00:15:58.993 fused_ordering(41) 00:15:58.993 fused_ordering(42) 00:15:58.993 fused_ordering(43) 00:15:58.993 
fused_ordering(44) 00:15:58.993 [fused_ordering(45) through fused_ordering(1011) elided: repetitive per-iteration progress entries, all completing in sequence between 00:15:58.993 and 00:16:02.074] 00:16:02.074 fused_ordering(1012)
00:16:02.074 fused_ordering(1013) 00:16:02.074 fused_ordering(1014) 00:16:02.074 fused_ordering(1015) 00:16:02.074 fused_ordering(1016) 00:16:02.074 fused_ordering(1017) 00:16:02.074 fused_ordering(1018) 00:16:02.074 fused_ordering(1019) 00:16:02.074 fused_ordering(1020) 00:16:02.074 fused_ordering(1021) 00:16:02.074 fused_ordering(1022) 00:16:02.074 fused_ordering(1023) 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.074 rmmod nvme_tcp 00:16:02.074 rmmod nvme_fabrics 00:16:02.074 rmmod nvme_keyring 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1036350 ']' 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1036350 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1036350 ']' 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1036350 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1036350 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1036350' 00:16:02.074 killing process with pid 1036350 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1036350 00:16:02.074 07:42:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1036350 00:16:03.452 07:42:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.452 07:42:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.453 07:42:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.453 07:42:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.453 07:42:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.453 07:42:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.453 07:42:54 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.453 07:42:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.362 07:42:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.362 00:16:05.362 real 0m10.134s 00:16:05.362 user 0m8.445s 00:16:05.362 sys 0m3.736s 00:16:05.362 07:42:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.362 07:42:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:05.362 ************************************ 00:16:05.362 END TEST nvmf_fused_ordering 00:16:05.362 ************************************ 00:16:05.362 07:42:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:05.362 07:42:56 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:05.362 07:42:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:05.362 07:42:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.362 07:42:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.362 ************************************ 00:16:05.362 START TEST nvmf_delete_subsystem 00:16:05.362 ************************************ 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:05.362 * Looking for test storage... 00:16:05.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.362 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.363 07:42:56 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.363 07:42:56 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.363 07:42:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.266 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.267 07:42:58 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:07.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:07.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.267 07:42:58 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:07.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:07.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.267 07:42:58 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.267 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:16:07.527 00:16:07.527 --- 10.0.0.2 ping statistics --- 00:16:07.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.527 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:16:07.527 00:16:07.527 --- 10.0.0.1 ping statistics --- 00:16:07.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.527 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1038958 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1038958 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1038958 ']' 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.527 07:42:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.527 [2024-07-15 07:42:58.623758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:07.527 [2024-07-15 07:42:58.623932] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.527 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.788 [2024-07-15 07:42:58.761454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.788 [2024-07-15 07:42:58.988043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:07.788 [2024-07-15 07:42:58.988125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.788 [2024-07-15 07:42:58.988155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.788 [2024-07-15 07:42:58.988173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.788 [2024-07-15 07:42:58.988191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.788 [2024-07-15 07:42:58.988315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.788 [2024-07-15 07:42:58.988324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.358 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.358 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:16:08.358 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.358 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.358 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 [2024-07-15 07:42:59.611772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 [2024-07-15 07:42:59.629319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 NULL1 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 Delay0 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1039113 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:08.619 07:42:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:08.619 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.619 [2024-07-15 07:42:59.753710] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
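For reference, the subsystem setup that the xtrace above walks through can be reproduced by hand with SPDK's scripts/rpc.py. The following is a minimal sketch, not the harness's own code path: it assumes an nvmf_tgt is already running and reachable on the default RPC socket (/var/tmp/spdk.sock), whereas the harness drives the target through its rpc_cmd wrapper inside the cvl_0_0_ns_spdk namespace. The NQN, addresses, and option values are copied from the log.

#!/usr/bin/env bash
set -e
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, same options as the log
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10  # any host allowed, up to 10 namespaces
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                            # 1000 MB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # ~1 s avg and p99 read/write latency (values in microseconds)
$RPC nvmf_subsystem_add_ns $NQN Delay0

# Queue deep random I/O against the slow Delay0 namespace, then delete the
# subsystem while those commands are still outstanding, as the test does.
build/bin/spdk_nvme_perf -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem $NQN
wait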
00:16:10.526 07:43:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:10.526 07:43:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.526 07:43:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:16:10.785 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records, interleaved with "starting I/O failed: -6"]
00:16:10.785 [2024-07-15 07:43:01.985379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set
00:16:10.785 [further completed-with-error records, interleaved with "starting I/O failed: -6"]
00:16:10.786 [2024-07-15 07:43:01.987232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set
00:16:10.786 [further completed-with-error records]
00:16:11.724 [2024-07-15 07:43:02.935436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set
00:16:11.985 [further completed-with-error records]
00:16:11.985 [2024-07-15 07:43:02.987463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set
00:16:11.985 [further completed-with-error records]
00:16:11.985 [2024-07-15 07:43:02.988353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set
00:16:11.985 [further completed-with-error records]
00:16:11.985 [2024-07-15 07:43:02.989096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set
00:16:11.985 [further completed-with-error records]
00:16:11.986 07:43:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.986 07:43:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:16:11.986 07:43:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1039113
00:16:11.986 07:43:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:16:11.986 [2024-07-15 07:43:02.993991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set
00:16:11.986 Initializing NVMe Controllers
00:16:11.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:11.986 Controller IO queue size 128, less than required.
00:16:11.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:11.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:16:11.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:16:11.986 Initialization complete. Launching workers.
00:16:11.986 ========================================================
00:16:11.986 Latency(us)
00:16:11.986 Device Information : IOPS MiB/s Average min max
00:16:11.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.45 0.08 898893.01 1276.16 1015961.00
00:16:11.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.39 0.09 883474.47 832.27 1017134.66
00:16:11.986 ========================================================
00:16:11.986 Total : 345.84 0.17 891029.11 832.27 1017134.66
00:16:11.986
00:16:11.986 [2024-07-15 07:43:02.995604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor
00:16:11.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1039113
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1039113) - No such process
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1039113
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1039113
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1039113
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:16:12.554 [2024-07-15 07:43:03.513026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1039522
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1039522
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:16:12.554 07:43:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:16:12.554 EAL: No free 2048 kB hugepages reported on node 1
00:16:12.554 [2024-07-15 07:43:03.634909] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:16:12.814 07:43:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:16:12.814 07:43:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1039522
00:16:12.814 07:43:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:16:13.383 [the same (( delay++ > 20 )) / kill -0 1039522 / sleep 0.5 poll repeats at 00:16:13.383, 00:16:13.981, 00:16:14.549, 00:16:15.117 and 00:16:15.376 while spdk_nvme_perf runs]
00:16:15.633 Initializing NVMe Controllers
00:16:15.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:15.633 Controller IO queue size 128, less than required.
00:16:15.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:15.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:16:15.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:16:15.633 Initialization complete. Launching workers.
00:16:15.633 ========================================================
00:16:15.633 Latency(us)
00:16:15.633 Device Information : IOPS MiB/s Average min max
00:16:15.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005592.08 1000414.77 1015202.48
00:16:15.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005370.70 1000295.34 1015286.55
00:16:15.633 ========================================================
00:16:15.633 Total : 256.00 0.12 1005481.39 1000295.34 1015286.55
00:16:15.634
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1039522
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1039522) - No such process
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1039522
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:15.892 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:15.892 rmmod nvme_tcp
00:16:15.892 rmmod nvme_fabrics
00:16:15.892 rmmod nvme_keyring
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1038958 ']'
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1038958
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1038958 ']'
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1038958
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1038958
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1038958'
00:16:16.149 killing process with pid 1038958
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1038958
00:16:16.149 07:43:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1038958
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:17.526 07:43:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:19.430 07:43:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:19.430
00:16:19.430 real 0m14.085s
00:16:19.430 user 0m30.910s
00:16:19.430 sys 0m3.184s
00:16:19.430 07:43:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:19.430 07:43:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:16:19.430 ************************************
00:16:19.430 END TEST nvmf_delete_subsystem
00:16:19.430 ************************************
00:16:19.430 07:43:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:19.430 07:43:10 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:16:19.430 07:43:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:19.430 07:43:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:19.430 07:43:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:19.430 ************************************
00:16:19.430 START TEST nvmf_ns_masking
00:16:19.430 ************************************
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
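What made the test above pass: nvmf_delete_subsystem was issued while perf still had a full queue of commands parked on the delay bdev, the outstanding commands were completed with an error (sct=0, sc=8, which in the NVMe generic status set corresponds to a command aborted due to submission queue deletion), new submissions failed with -6 (-ENXIO), and perf exited nonzero. The script then only confirms that the perf pid disappears within a bounded time; the polling idiom traced at delete_subsystem.sh lines 56-60 is roughly the following (a reconstruction from the traced commands, with the failure branch abbreviated):

    delay=0
    while kill -0 "$perf_pid"; do      # kill -0 succeeds while the process exists
        sleep 0.5
        (( delay++ > 20 )) && exit 1   # illustrative failure branch; ~10 s budget
    done

Teardown then follows the nvmftestfini path seen in the trace: modprobe -v -r nvme-tcp (which also pulls out nvme_fabrics and nvme_keyring), killprocess on the target pid, and ip -4 addr flush on the test interface.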
00:16:19.430 * Looking for test storage...
00:16:19.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=eedec402-a4f8-44fa-9951-6e5f39af68d9
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=658ccd75-3908-4546-be86-02a366704c59
00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=90de7ef6-de5c-4777-b682-7fea45f34c1b 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.430 07:43:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:21.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:21.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.333 
07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:21.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:21.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.333 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:16:21.334 00:16:21.334 --- 10.0.0.2 ping statistics --- 00:16:21.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.334 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:21.334 00:16:21.334 --- 10.0.0.1 ping statistics --- 00:16:21.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.334 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.334 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1041996 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1041996 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1041996 ']' 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.592 07:43:12 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.592 07:43:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.592 [2024-07-15 07:43:12.660094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:21.592 [2024-07-15 07:43:12.660250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.592 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.592 [2024-07-15 07:43:12.802578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.852 [2024-07-15 07:43:13.033933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.852 [2024-07-15 07:43:13.033993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.852 [2024-07-15 07:43:13.034019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.852 [2024-07-15 07:43:13.034042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.852 [2024-07-15 07:43:13.034061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.852 [2024-07-15 07:43:13.034102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.420 07:43:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.677 [2024-07-15 07:43:13.821801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.677 07:43:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:22.677 07:43:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:22.677 07:43:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:23.244 Malloc1 00:16:23.244 07:43:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:23.502 Malloc2 00:16:23.502 07:43:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
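For the phy (NET_TYPE=phy) run, the two ice-driven E810 ports discovered above are split across network namespaces: cvl_0_0 is moved into a private namespace and carries the target address, cvl_0_1 stays in the root namespace as the initiator, and nvmf_tgt is launched entirely inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF, pid 1041996 above). Condensed from the nvmf_tcp_init steps in the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify the path in both directions before the target application is started.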
00:16:23.761 07:43:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:16:24.018 07:43:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:24.276 [2024-07-15 07:43:15.423036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:24.276 07:43:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:16:24.276 07:43:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 90de7ef6-de5c-4777-b682-7fea45f34c1b -a 10.0.0.2 -s 4420 -i 4
00:16:24.533 07:43:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:16:24.533 07:43:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:16:24.533 07:43:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:24.533 07:43:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:24.533 07:43:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:26.438 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:26.696 [ 0]:0x1
00:16:26.696 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:26.696 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:26.696 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d6fd35232c2490585fb68adb8cde0fe
00:16:26.696 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d6fd35232c2490585fb68adb8cde0fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:26.696 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
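The ns_is_visible helper exercised above keys off the NGUID: a visible namespace reports its real NGUID, while a masked one reads back as all zeros. Roughly, as a paraphrase of the helper rather than its verbatim source:

  ns_is_visible() {                               # $1 = nsid, e.g. 0x1
      nvme list-ns /dev/nvme0 | grep "$1"         # prints e.g. '[ 0]:0x1' when attached
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]   # all-zero NGUID means masked
  }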
00:16:26.956 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:16:26.956 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:26.956 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:16:26.956 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:26.956 07:43:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d6fd35232c2490585fb68adb8cde0fe
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d6fd35232c2490585fb68adb8cde0fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 1]:0x2
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:26.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:26.956 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:27.216 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:16:27.475 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:16:27.475 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 90de7ef6-de5c-4777-b682-7fea45f34c1b -a 10.0.0.2 -s 4420 -i 4
00:16:27.733 07:43:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:16:27.733 07:43:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:16:27.733 07:43:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:27.733 07:43:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:16:27.733 07:43:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:16:27.733 07:43:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
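This is the pivot of the test: namespace 1 is re-added with --no-auto-visible, so from here no host can see it until the target is told otherwise. The reconfiguration step, in short (hedged sketch using the same $RPC shorthand):

  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # host1 then reconnects; the NOT wrapper below asserts that ns_is_visible 0x1 now fails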
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:29.692 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:29.954 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
00:16:29.954 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:29.954 07:43:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d6fd35232c2490585fb68adb8cde0fe
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d6fd35232c2490585fb68adb8cde0fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 1]:0x2
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:30.212 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
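Per-namespace, per-host visibility is toggled with a single RPC pair, which is exactly what the trace grants and then revokes for host1 (sketch):

  $RPC nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask nsid 1 for host1
  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again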
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:16:30.470 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:30.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:30.727 07:43:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:30.986 07:43:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:16:30.986 07:43:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 90de7ef6-de5c-4777-b682-7fea45f34c1b -a 10.0.0.2 -s 4420 -i 4
00:16:31.246 07:43:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:16:31.246 07:43:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:16:31.246 07:43:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:31.246 07:43:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:16:31.246 07:43:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:16:31.246 07:43:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
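waitforserial's device count comes straight from lsblk: it polls until the number of block devices carrying the subsystem serial matches the expected count, two now that both namespaces are visible to host1. A hedged paraphrase of the loop:

  i=0; nvme_device_counter=${2:-1}                 # expected devices, 2 in this call
  while (( i++ <= 15 )); do
      sleep 2
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( nvme_devices == nvme_device_counter )) && break
  done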
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:33.154 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d6fd35232c2490585fb68adb8cde0fe
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d6fd35232c2490585fb68adb8cde0fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:33.413 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:33.671 [ 1]:0x2
00:16:33.671 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:33.671 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:33.671 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
00:16:33.671 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:33.671 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:33.929 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:33.930 07:43:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:16:33.930 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:34.188 [2024-07-15 07:43:25.335625] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:16:34.188 request:
00:16:34.188 {
00:16:34.188 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:34.188 "nsid": 2,
00:16:34.188 "host": "nqn.2016-06.io.spdk:host1",
00:16:34.188 "method": "nvmf_ns_remove_host",
00:16:34.188 "req_id": 1
00:16:34.188 }
00:16:34.188 Got JSON-RPC error response
00:16:34.188 response:
00:16:34.188 {
00:16:34.188 "code": -32602,
00:16:34.188 "message": "Invalid parameters"
00:16:34.188 }
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:34.188 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8b4bcde875d4d00ba301c39a5cedd3c
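The -32602 above is the expected outcome: nsid 2 was added without --no-auto-visible, and the target only honors the per-host visibility RPCs for namespaces created in masked mode (my reading of the nvmf_rpc_ns_visible_paused error). The NOT wrapper treats the non-zero exit as a pass:

  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
  # => JSON-RPC error -32602 'Invalid parameters', because nsid 2 is auto-visible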
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8b4bcde875d4d00ba301c39a5cedd3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:34.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1043637
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1043637 /var/tmp/host.sock
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1043637 ']'
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:16:34.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:34.447 07:43:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:34.447 [2024-07-15 07:43:25.589525] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:16:34.447 [2024-07-15 07:43:25.589679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043637 ]
00:16:34.707 EAL: No free 2048 kB hugepages reported on node 1
00:16:34.707 [2024-07-15 07:43:25.717098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:34.966 [2024-07-15 07:43:25.956033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:35.902 07:43:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:35.902 07:43:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:16:35.902 07:43:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:35.902 07:43:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:36.160 07:43:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid eedec402-a4f8-44fa-9951-6e5f39af68d9
00:16:36.160 07:43:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:16:36.160 07:43:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g EEDEC402A4F844FA99516E5F39AF68D9 -i
00:16:36.419 07:43:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 658ccd75-3908-4546-be86-02a366704c59
00:16:36.419 07:43:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:16:36.419 07:43:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 658CCD7539084546BE8602A366704C59 -i
00:16:36.677 07:43:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:36.935 07:43:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:16:37.193 07:43:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:16:37.193 07:43:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:16:37.759 nvme0n1
00:16:37.759 07:43:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
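uuid2nguid is a thin conversion: NVMe NGUIDs are 32 hex digits, so the helper strips the dashes from a bdev UUID (and, judging by the -g values above, upcases it). A hedged reconstruction:

  uuid2nguid() {
      tr -d - <<< "${1^^}"     # uppercase, then drop the dashes
  }
  uuid2nguid eedec402-a4f8-44fa-9951-6e5f39af68d9   # -> EEDEC402A4F844FA99516E5F39AF68D9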
00:16:37.759 07:43:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:16:38.327 nvme1n2
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:16:38.327 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:16:38.585 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ eedec402-a4f8-44fa-9951-6e5f39af68d9 == \e\e\d\e\c\4\0\2\-\a\4\f\8\-\4\4\f\a\-\9\9\5\1\-\6\e\5\f\3\9\a\f\6\8\d\9 ]]
00:16:38.585 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:16:38.585 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:16:38.585 07:43:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:16:38.843 07:43:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 658ccd75-3908-4546-be86-02a366704c59 == \6\5\8\c\c\d\7\5\-\3\9\0\8\-\4\5\4\6\-\b\e\8\6\-\0\2\a\3\6\6\7\0\4\c\5\9 ]]
00:16:38.843 07:43:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1043637
00:16:38.843 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1043637 ']'
00:16:38.843 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1043637
00:16:38.843 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043637
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043637'
00:16:38.844 killing process with pid 1043637
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1043637
00:16:38.844 07:43:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1043637
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini
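The verification that just ran reads the attached controllers back through the host-side RPC socket and compares bdev names and UUIDs against what was configured at ns-add time; condensed (sketch, with hostrpc expanded to the -s /var/tmp/host.sock form from the trace):

  HOSTRPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock'
  $HOSTRPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect: nvme0n1 nvme1n2
  $HOSTRPC bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'       # eedec402-a4f8-44fa-9951-6e5f39af68d9
  $HOSTRPC bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'       # 658ccd75-3908-4546-be86-02a366704c59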
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:41.408 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:41.409 rmmod nvme_tcp
00:16:41.668 rmmod nvme_fabrics
00:16:41.668 rmmod nvme_keyring
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1041996 ']'
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1041996
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1041996 ']'
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1041996
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041996
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041996'
00:16:41.668 killing process with pid 1041996
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1041996
00:16:41.668 07:43:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1041996
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:43.573 07:43:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:45.508 07:43:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:45.508
00:16:45.508 real 0m25.972s
00:16:45.508 user 0m35.354s
00:16:45.508 sys 0m4.329s
00:16:45.508 07:43:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:45.508 07:43:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:45.508 ************************************
00:16:45.508 END TEST nvmf_ns_masking
00:16:45.508 ************************************
00:16:45.508 07:43:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:45.508 07:43:36 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:16:45.508 07:43:36 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:16:45.508 07:43:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:45.508 07:43:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:45.508 07:43:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:45.508 ************************************
00:16:45.508 START TEST nvmf_nvme_cli
00:16:45.508 ************************************
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:16:45.508 * Looking for test storage...
00:16:45.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
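The nvme_cli test's host identity comes from nvme gen-hostnqn, which emits a UUID-based NQN; the host ID is then the bare UUID. A sketch of the derivation (the parameter expansion is my paraphrase of common.sh, not its verbatim source):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip to the UUID after the last colon
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")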
00:16:45.508 07:43:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable
00:16:45.509 07:43:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:16:47.413 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:47.413 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=()
00:16:47.413 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:47.413 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:47.413 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:47.413 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=()
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=()
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=()
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=()
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:16:47.414 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:16:47.414 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:16:47.414 Found net devices under 0000:0a:00.0: cvl_0_0
net_devs+=("${pci_net_devs[@]}") 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:47.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.414 07:43:38 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:16:47.414 00:16:47.414 --- 10.0.0.2 ping statistics --- 00:16:47.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.414 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:16:47.414 00:16:47.414 --- 10.0.0.1 ping statistics --- 00:16:47.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.414 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:47.414 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1046634 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1046634 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1046634 ']' 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.415 07:43:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:47.415 [2024-07-15 07:43:38.616413] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
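The nvmf_tcp_init sequence traced above is what gives the test its two-endpoint topology on a single host: one port of the dual-port E810 NIC (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of those steps, assembled from the traced commands (the cvl_0_* names are whatever this rig enumerated; substitute your own interfaces):

    # clear any stale addressing on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # create an isolated namespace and move the target port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target app is then launched inside the namespace, as traced:
    # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF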
00:16:47.415 [2024-07-15 07:43:38.616565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.674 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.674 [2024-07-15 07:43:38.750959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.932 [2024-07-15 07:43:39.009438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.932 [2024-07-15 07:43:39.009517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.932 [2024-07-15 07:43:39.009546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.932 [2024-07-15 07:43:39.009567] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.932 [2024-07-15 07:43:39.009589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.932 [2024-07-15 07:43:39.009717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.932 [2024-07-15 07:43:39.009765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.932 [2024-07-15 07:43:39.010221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.932 [2024-07-15 07:43:39.010228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 [2024-07-15 07:43:39.613259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 Malloc0 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.497 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.755 Malloc1 00:16:48.755 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.755 07:43:39 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:48.755 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 [2024-07-15 07:43:39.798710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:48.756 00:16:48.756 Discovery Log Number of Records 2, Generation counter 2 00:16:48.756 =====Discovery Log Entry 0====== 00:16:48.756 trtype: tcp 00:16:48.756 adrfam: ipv4 00:16:48.756 subtype: current discovery subsystem 00:16:48.756 treq: not required 00:16:48.756 portid: 0 00:16:48.756 trsvcid: 4420 00:16:48.756 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:48.756 traddr: 10.0.0.2 00:16:48.756 eflags: explicit discovery connections, duplicate discovery information 00:16:48.756 sectype: none 00:16:48.756 =====Discovery Log Entry 1====== 00:16:48.756 trtype: tcp 00:16:48.756 adrfam: ipv4 00:16:48.756 subtype: nvme subsystem 00:16:48.756 treq: not required 00:16:48.756 portid: 0 00:16:48.756 trsvcid: 4420 00:16:48.756 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:48.756 traddr: 10.0.0.2 00:16:48.756 eflags: none 00:16:48.756 sectype: none 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:48.756 07:43:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.695 07:43:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:49.695 07:43:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:49.695 07:43:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.695 07:43:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:49.695 07:43:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:49.695 07:43:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:51.600 07:43:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:51.600 /dev/nvme0n1 ]] 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.600 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:51.858 07:43:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:52.115 rmmod nvme_tcp 00:16:52.115 rmmod nvme_fabrics 00:16:52.115 rmmod nvme_keyring 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1046634 ']' 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1046634 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1046634 ']' 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1046634 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1046634 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1046634' 00:16:52.115 killing process with pid 1046634 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1046634 00:16:52.115 07:43:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1046634 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.022 07:43:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.930 07:43:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:55.930 00:16:55.930 real 0m10.421s 00:16:55.930 user 0m22.530s 00:16:55.930 sys 0m2.317s 00:16:55.930 07:43:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.930 07:43:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:55.930 ************************************ 00:16:55.930 END TEST nvmf_nvme_cli 00:16:55.930 ************************************ 00:16:55.930 07:43:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:55.930 07:43:46 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:55.930 07:43:46 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:55.930 07:43:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:55.930 07:43:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.930 07:43:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.930 ************************************ 00:16:55.930 START TEST nvmf_host_management 00:16:55.930 ************************************ 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:55.930 * Looking for test storage... 00:16:55.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.930 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:55.931 
07:43:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:55.931 07:43:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:57.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:57.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.833 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:57.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:57.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:57.834 07:43:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.834 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:16:58.092 00:16:58.092 --- 10.0.0.2 ping statistics --- 00:16:58.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.092 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:58.092 00:16:58.092 --- 10.0.0.1 ping statistics --- 00:16:58.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.092 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1049282 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1049282 00:16:58.092 
07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1049282 ']' 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.092 07:43:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.092 [2024-07-15 07:43:49.263006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:58.092 [2024-07-15 07:43:49.263158] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.350 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.350 [2024-07-15 07:43:49.395794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.609 [2024-07-15 07:43:49.652784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.609 [2024-07-15 07:43:49.652865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.609 [2024-07-15 07:43:49.652918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.609 [2024-07-15 07:43:49.652958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.609 [2024-07-15 07:43:49.652997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
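Those app_setup_trace notices are actionable while the target is up: -e 0xFFFF enabled every tracepoint group, and -i 0 is the shm id that names the shared-memory trace region. A short sketch of the two capture paths the notice itself suggests (the /tmp destination is illustrative):

    # snapshot the nvmf app's tracepoints at runtime (shm id 0)
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0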
00:16:58.609 [2024-07-15 07:43:49.653160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.609 [2024-07-15 07:43:49.653232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.609 [2024-07-15 07:43:49.653310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.609 [2024-07-15 07:43:49.653314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 [2024-07-15 07:43:50.249277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 Malloc0 00:16:59.175 [2024-07-15 07:43:50.362781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1049454 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1049454 /var/tmp/bdevperf.sock 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1049454 ']' 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.175 { 00:16:59.175 "params": { 00:16:59.175 "name": "Nvme$subsystem", 00:16:59.175 "trtype": "$TEST_TRANSPORT", 00:16:59.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.175 "adrfam": "ipv4", 00:16:59.175 "trsvcid": "$NVMF_PORT", 00:16:59.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.175 "hdgst": ${hdgst:-false}, 00:16:59.175 "ddgst": ${ddgst:-false} 00:16:59.175 }, 00:16:59.175 "method": "bdev_nvme_attach_controller" 00:16:59.175 } 00:16:59.175 EOF 00:16:59.175 )") 00:16:59.175 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:59.433 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:59.433 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:59.433 07:43:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.433 "params": { 00:16:59.433 "name": "Nvme0", 00:16:59.433 "trtype": "tcp", 00:16:59.433 "traddr": "10.0.0.2", 00:16:59.433 "adrfam": "ipv4", 00:16:59.433 "trsvcid": "4420", 00:16:59.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:59.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:59.433 "hdgst": false, 00:16:59.433 "ddgst": false 00:16:59.433 }, 00:16:59.433 "method": "bdev_nvme_attach_controller" 00:16:59.433 }' 00:16:59.433 [2024-07-15 07:43:50.481057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:59.433 [2024-07-15 07:43:50.481215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049454 ] 00:16:59.433 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.433 [2024-07-15 07:43:50.617399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.691 [2024-07-15 07:43:50.854298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.258 Running I/O for 10 seconds... 
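gen_nvmf_target_json, expanded above, turns the shell heredoc template into the controller-attach stanza that bdevperf reads from the /dev/fd/63 process substitution. A hedged reconstruction with the config written to a regular file instead: the attach parameters are verbatim from the trace, but the surrounding "subsystems"/"config" wrapper follows SPDK's usual JSON-config shape and is not visible in this excerpt, so treat it as an assumption.

    # materialize what bdevperf received on --json /dev/fd/63 (sketch)
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # same invocation as the trace (bdevperf lives in build/examples/),
    # minus the process substitution
    bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10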
00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.258 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:00.258 [2024-07-15 07:43:51.483596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.258 [2024-07-15 07:43:51.483684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.258 [2024-07-15 07:43:51.483724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.258 [2024-07-15 07:43:51.483747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.483768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.259 [2024-07-15 07:43:51.483788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.483810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.259 [2024-07-15 07:43:51.483830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.483851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:17:00.259 [2024-07-15 07:43:51.484323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
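The run of ABORTED - SQ DELETION completions here is the behavior under test, not a failure: the nvmf_subsystem_remove_host call issued above revokes host0's access while bdevperf still has up to 64 writes queued (-q 64), so the target deletes the host's qpairs and every in-flight command completes aborted. A sketch of the host-management RPC pair being exercised (rpc.py spelling assumed; the test drives it through its rpc_cmd wrapper):

    # revoke the host: its live connection to cnode0 is torn down
    scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-admit the host so it can reconnect
    scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0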
00:17:00.259 [2024-07-15 07:43:51.484731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.484957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.484980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 
[2024-07-15 07:43:51.485214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 
07:43:51.485678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.485960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.485984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.486006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.486028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.486050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.486073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.486095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.486118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.486140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 
07:43:51.486163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.486195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.486218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.259 [2024-07-15 07:43:51.486240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.259 [2024-07-15 07:43:51.486270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.260 [2024-07-15 07:43:51.486594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.260 [2024-07-15 07:43:51.486616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 
07:43:51.486639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.486969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.486990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.519 [2024-07-15 07:43:51.487068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:00.519 [2024-07-15 07:43:51.487237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.519 [2024-07-15 07:43:51.487381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 [2024-07-15 07:43:51.487404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.519 [2024-07-15 07:43:51.487426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.519 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:00.519 [2024-07-15 07:43:51.487735] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 
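The burst of ABORTED completions above is the intended effect of this step: the test revokes the host's authorization while bdevperf still has 64 writes in flight, so the target deletes the submission queues underneath them. Stripped of the harness wrappers, the toggle reduces to two RPCs (NQNs and socket path taken from this run; a minimal sketch, not the full host_management.sh flow):

    # Revoke access: the host's qpairs are deleted and every outstanding
    # WRITE completes as ABORTED - SQ DELETION (00/08).
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-authorize the host so the initiator's reset/reconnect (logged below) can succeed.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0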
00:17:00.519 [2024-07-15 07:43:51.488979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:00.519 task offset: 24576 on job bdev=Nvme0n1 fails
00:17:00.519
00:17:00.519                                                                    Latency(us)
00:17:00.519 Device Information                                      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:00.519 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:00.519 Job: Nvme0n1 ended in about 0.17 seconds with error
00:17:00.519 Verification LBA range: start 0x0 length 0x400
00:17:00.519 Nvme0n1                                                 :       0.17    1119.82      69.99     373.27       0.00   40171.19    4611.79   41943.04
00:17:00.519 ===================================================================================================================
00:17:00.519 Total                                                   :               1119.82      69.99     373.27       0.00   40171.19    4611.79   41943.04
00:17:00.519 [2024-07-15 07:43:51.494104] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:00.519 [2024-07-15 07:43:51.494152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:17:00.519 07:43:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:00.519 07:43:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-15 07:43:51.540989] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:01.463 07:43:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1049454
07:43:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:17:01.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1049454 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
07:43:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
07:43:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:01.463 {
00:17:01.463 "params": {
00:17:01.463 "name": "Nvme$subsystem",
00:17:01.463 "trtype": "$TEST_TRANSPORT",
00:17:01.463 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:01.463 "adrfam": "ipv4",
00:17:01.463 "trsvcid": "$NVMF_PORT",
00:17:01.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:01.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:01.464 "hdgst": ${hdgst:-false},
00:17:01.464 "ddgst": ${ddgst:-false}
00:17:01.464 },
00:17:01.464 "method": "bdev_nvme_attach_controller"
00:17:01.464 }
00:17:01.464 EOF
00:17:01.464 )")
00:17:01.464 07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:17:01.464 07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
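The "Killed ... bdevperf" line above also documents the harness pattern: a per-run JSON config is generated on the fly by the harness function gen_nvmf_target_json and handed to bdevperf through a process substitution, so nothing touches disk. A rough standalone sketch of the same invocation from the SPDK tree (the expanded JSON it produces is printed just below):

    # gen_nvmf_target_json fills the attach_controller template above with
    # this run's target address; bdevperf then drives a queue-depth-64,
    # 64 KiB verify workload for 1 second over the resulting NVMe-oF bdev.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 1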
00:17:01.464 07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:17:01.464 07:43:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:17:01.464 "params": {
00:17:01.464 "name": "Nvme0",
00:17:01.464 "trtype": "tcp",
00:17:01.464 "traddr": "10.0.0.2",
00:17:01.464 "adrfam": "ipv4",
00:17:01.464 "trsvcid": "4420",
00:17:01.464 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:01.464 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:01.464 "hdgst": false,
00:17:01.464 "ddgst": false
00:17:01.464 },
00:17:01.464 "method": "bdev_nvme_attach_controller"
00:17:01.464 }'
[2024-07-15 07:43:52.577520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 07:43:52.577656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049737 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 07:43:52.703660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 07:43:52.944008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:17:03.704
00:17:03.704                                                                    Latency(us)
00:17:03.704 Device Information                                      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:03.704 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.704 Verification LBA range: start 0x0 length 0x400
00:17:03.704 Nvme0n1                                                 :       1.01    1329.95      83.12       0.00       0.00   47301.67    8107.05   41166.32
00:17:03.704 ===================================================================================================================
00:17:03.704 Total                                                   :               1329.95      83.12       0.00       0.00   47301.67    8107.05   41166.32
00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
07:43:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
07:43:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
07:43:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
07:43:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
07:43:55 nvmf_tcp.nvmf_host_management --
nvmf/common.sh@489 -- # '[' -n 1049282 ']' 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1049282 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1049282 ']' 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1049282 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1049282 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1049282' 00:17:04.634 killing process with pid 1049282 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1049282 00:17:04.634 07:43:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1049282 00:17:06.006 [2024-07-15 07:43:56.941301] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.006 07:43:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.911 07:43:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.911 07:43:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:07.911 00:17:07.911 real 0m12.048s 00:17:07.911 user 0m33.232s 00:17:07.911 sys 0m2.973s 00:17:07.911 07:43:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.911 07:43:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:07.911 ************************************ 00:17:07.911 END TEST nvmf_host_management 00:17:07.911 ************************************ 00:17:07.911 07:43:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:07.911 07:43:59 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:07.911 07:43:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:07.911 07:43:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.911 07:43:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.911 ************************************ 00:17:07.911 START TEST nvmf_lvol 00:17:07.911 
************************************ 00:17:07.912 07:43:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:08.170 * Looking for test storage... 00:17:08.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... same three toolchain entries repeated, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:43:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same three toolchain entries repeated, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:43:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH
07:43:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same three toolchain entries repeated, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0
07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
07:43:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']'
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs
07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- #
local -g is_hw=no 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.170 07:43:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.069 07:44:01 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.069 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
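The device matching above works by globbing sysfs: for each supported PCI function, the harness lists the kernel net interfaces registered under that device. A one-line equivalent for the first port found in this run (the same lookup repeats just below for the second port, 0000:0a:00.1):

    # Equivalent of the pci_net_devs glob in nvmf/common.sh for 0000:0a:00.0;
    # on this host it prints the renamed ice interface, cvl_0_0.
    ls /sys/bus/pci/devices/0000:0a:00.0/net/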
00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:17:10.070 00:17:10.070 --- 10.0.0.2 ping statistics --- 00:17:10.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.070 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:17:10.070 00:17:10.070 --- 10.0.0.1 ping statistics --- 00:17:10.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.070 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1052190 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1052190 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1052190 ']' 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.070 07:44:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:10.328 [2024-07-15 07:44:01.340010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:10.328 [2024-07-15 07:44:01.340169] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.328 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.328 [2024-07-15 07:44:01.476012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.586 [2024-07-15 07:44:01.732804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.586 [2024-07-15 07:44:01.732890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:10.586 [2024-07-15 07:44:01.732926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.586 [2024-07-15 07:44:01.732947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.586 [2024-07-15 07:44:01.732979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.586 [2024-07-15 07:44:01.733100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.586 [2024-07-15 07:44:01.733137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.586 [2024-07-15 07:44:01.733151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.151 07:44:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:11.409 [2024-07-15 07:44:02.485806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.409 07:44:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.666 07:44:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:11.666 07:44:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.924 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:11.924 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:12.183 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:12.440 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8aaa5175-5b74-442a-8bda-739d95aae181 00:17:12.440 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8aaa5175-5b74-442a-8bda-739d95aae181 lvol 20 00:17:12.698 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a4c294c0-6271-411b-88f7-5279a5ac34bc 00:17:12.698 07:44:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:12.956 07:44:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4c294c0-6271-411b-88f7-5279a5ac34bc 00:17:13.213 07:44:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:17:13.470 [2024-07-15 07:44:04.614273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.470 07:44:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:13.727 07:44:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1052638 00:17:13.727 07:44:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:13.727 07:44:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:13.984 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.917 07:44:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a4c294c0-6271-411b-88f7-5279a5ac34bc MY_SNAPSHOT 00:17:15.175 07:44:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1e8797f9-964f-4ff5-87fd-0483c5d91d48 00:17:15.175 07:44:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a4c294c0-6271-411b-88f7-5279a5ac34bc 30 00:17:15.433 07:44:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1e8797f9-964f-4ff5-87fd-0483c5d91d48 MY_CLONE 00:17:15.691 07:44:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=36edcef9-e5d9-4d50-baab-7ab76f63b7b2 00:17:15.691 07:44:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 36edcef9-e5d9-4d50-baab-7ab76f63b7b2 00:17:16.624 07:44:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1052638 00:17:24.796 Initializing NVMe Controllers 00:17:24.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:24.796 Controller IO queue size 128, less than required. 00:17:24.796 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:24.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:24.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:24.796 Initialization complete. Launching workers. 
00:17:24.796 ======================================================== 00:17:24.796 Latency(us) 00:17:24.796 Device Information : IOPS MiB/s Average min max 00:17:24.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8254.26 32.24 15522.36 544.72 179656.01 00:17:24.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8102.17 31.65 15797.06 3427.86 186852.40 00:17:24.796 ======================================================== 00:17:24.796 Total : 16356.44 63.89 15658.43 544.72 186852.40 00:17:24.796 00:17:24.796 07:44:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:24.796 07:44:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4c294c0-6271-411b-88f7-5279a5ac34bc 00:17:24.796 07:44:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8aaa5175-5b74-442a-8bda-739d95aae181 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.054 rmmod nvme_tcp 00:17:25.054 rmmod nvme_fabrics 00:17:25.054 rmmod nvme_keyring 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1052190 ']' 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1052190 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1052190 ']' 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1052190 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1052190 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1052190' 00:17:25.054 killing process with pid 1052190 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1052190 00:17:25.054 07:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1052190 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.959 
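Teardown runs strictly in reverse order of setup, so nothing is deleted while another object still sits on top of it: the subsystem goes first (dropping the namespace's claim on the lvol), then the lvol, then the lvstore, and only once the target-side objects are gone does nvmftestfini unload the kernel initiator modules and kill the target. As a sketch of the steps just traced:

  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $SPDK/scripts/rpc.py bdev_lvol_delete a4c294c0-6271-411b-88f7-5279a5ac34bc
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -u 8aaa5175-5b74-442a-8bda-739d95aae181
  sync
  modprobe -v -r nvme-tcp      # nvmftestfini retries the unloads up to 20 times
  modprobe -v -r nvme-fabrics
  kill 1052190                 # the nvmf_tgt pid for this run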
07:44:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.959 07:44:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.869 07:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.869 00:17:28.869 real 0m20.734s 00:17:28.869 user 1m8.844s 00:17:28.870 sys 0m5.530s 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:28.870 ************************************ 00:17:28.870 END TEST nvmf_lvol 00:17:28.870 ************************************ 00:17:28.870 07:44:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.870 07:44:19 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:28.870 07:44:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.870 07:44:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.870 07:44:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.870 ************************************ 00:17:28.870 START TEST nvmf_lvs_grow 00:17:28.870 ************************************ 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:28.870 * Looking for test storage... 
00:17:28.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.870 07:44:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.776 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.777 07:44:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:31.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:17:31.036 00:17:31.036 --- 10.0.0.2 ping statistics --- 00:17:31.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.036 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:17:31.036 00:17:31.036 --- 10.0.0.1 ping statistics --- 00:17:31.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.036 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1056031 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1056031 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1056031 ']' 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.036 07:44:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:31.036 [2024-07-15 07:44:22.158962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:31.036 [2024-07-15 07:44:22.159092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.036 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.294 [2024-07-15 07:44:22.297935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.551 [2024-07-15 07:44:22.554801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.551 [2024-07-15 07:44:22.554888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:31.551 [2024-07-15 07:44:22.554917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.551 [2024-07-15 07:44:22.554943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.551 [2024-07-15 07:44:22.554965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.551 [2024-07-15 07:44:22.555033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.117 [2024-07-15 07:44:23.310429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.117 07:44:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:32.376 ************************************ 00:17:32.376 START TEST lvs_grow_clean 00:17:32.376 ************************************ 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:32.376 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:32.634 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:32.634 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:32.891 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:32.891 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:32.891 07:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:33.148 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:33.148 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:33.148 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 97a1feed-49a6-40f5-a46e-a905644ccb75 lvol 150 00:17:33.405 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc995ee0-99f6-40fe-a635-6fb6ef4dd175 00:17:33.405 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:33.405 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:33.405 [2024-07-15 07:44:24.621535] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:33.405 [2024-07-15 07:44:24.621671] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:33.405 true 00:17:33.665 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:33.665 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:33.665 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:33.665 07:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:34.231 07:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc995ee0-99f6-40fe-a635-6fb6ef4dd175 00:17:34.489 07:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:34.489 [2024-07-15 07:44:25.713192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.745 07:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1056599 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1056599 /var/tmp/bdevperf.sock 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1056599 ']' 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.001 07:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:35.001 [2024-07-15 07:44:26.096070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
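One detail worth calling out here: the initiator side of these grow tests is not a kernel NVMe host but a second SPDK process, bdevperf, started idle (-z) on core 1 (-m 0x2) with its own RPC socket so the test can attach the exported namespace and trigger I/O on demand. The pattern, roughly, using the socket path from the run above:

  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Pointing rpc.py at /var/tmp/bdevperf.sock instead of the default /var/tmp/spdk.sock is what lets the target and the bdevperf initiator be driven independently from the same shell.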
00:17:35.001 [2024-07-15 07:44:26.096237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056599 ] 00:17:35.001 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.001 [2024-07-15 07:44:26.227057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.259 [2024-07-15 07:44:26.478610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.823 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.823 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:35.823 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:36.386 Nvme0n1 00:17:36.386 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:36.648 [ 00:17:36.648 { 00:17:36.648 "name": "Nvme0n1", 00:17:36.648 "aliases": [ 00:17:36.648 "dc995ee0-99f6-40fe-a635-6fb6ef4dd175" 00:17:36.648 ], 00:17:36.648 "product_name": "NVMe disk", 00:17:36.648 "block_size": 4096, 00:17:36.648 "num_blocks": 38912, 00:17:36.648 "uuid": "dc995ee0-99f6-40fe-a635-6fb6ef4dd175", 00:17:36.648 "assigned_rate_limits": { 00:17:36.648 "rw_ios_per_sec": 0, 00:17:36.648 "rw_mbytes_per_sec": 0, 00:17:36.648 "r_mbytes_per_sec": 0, 00:17:36.648 "w_mbytes_per_sec": 0 00:17:36.648 }, 00:17:36.648 "claimed": false, 00:17:36.648 "zoned": false, 00:17:36.648 "supported_io_types": { 00:17:36.648 "read": true, 00:17:36.648 "write": true, 00:17:36.648 "unmap": true, 00:17:36.648 "flush": true, 00:17:36.648 "reset": true, 00:17:36.648 "nvme_admin": true, 00:17:36.648 "nvme_io": true, 00:17:36.648 "nvme_io_md": false, 00:17:36.648 "write_zeroes": true, 00:17:36.648 "zcopy": false, 00:17:36.648 "get_zone_info": false, 00:17:36.648 "zone_management": false, 00:17:36.648 "zone_append": false, 00:17:36.648 "compare": true, 00:17:36.648 "compare_and_write": true, 00:17:36.648 "abort": true, 00:17:36.648 "seek_hole": false, 00:17:36.648 "seek_data": false, 00:17:36.648 "copy": true, 00:17:36.648 "nvme_iov_md": false 00:17:36.648 }, 00:17:36.648 "memory_domains": [ 00:17:36.648 { 00:17:36.648 "dma_device_id": "system", 00:17:36.648 "dma_device_type": 1 00:17:36.648 } 00:17:36.648 ], 00:17:36.648 "driver_specific": { 00:17:36.648 "nvme": [ 00:17:36.648 { 00:17:36.648 "trid": { 00:17:36.648 "trtype": "TCP", 00:17:36.648 "adrfam": "IPv4", 00:17:36.648 "traddr": "10.0.0.2", 00:17:36.648 "trsvcid": "4420", 00:17:36.648 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:36.648 }, 00:17:36.648 "ctrlr_data": { 00:17:36.648 "cntlid": 1, 00:17:36.648 "vendor_id": "0x8086", 00:17:36.648 "model_number": "SPDK bdev Controller", 00:17:36.648 "serial_number": "SPDK0", 00:17:36.648 "firmware_revision": "24.09", 00:17:36.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:36.648 "oacs": { 00:17:36.648 "security": 0, 00:17:36.648 "format": 0, 00:17:36.648 "firmware": 0, 00:17:36.648 "ns_manage": 0 00:17:36.648 }, 00:17:36.648 "multi_ctrlr": true, 00:17:36.648 "ana_reporting": false 00:17:36.648 }, 
00:17:36.648 "vs": { 00:17:36.648 "nvme_version": "1.3" 00:17:36.648 }, 00:17:36.648 "ns_data": { 00:17:36.648 "id": 1, 00:17:36.648 "can_share": true 00:17:36.648 } 00:17:36.648 } 00:17:36.648 ], 00:17:36.648 "mp_policy": "active_passive" 00:17:36.648 } 00:17:36.648 } 00:17:36.648 ] 00:17:36.648 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1056739 00:17:36.648 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:36.648 07:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:36.923 Running I/O for 10 seconds... 00:17:37.867 Latency(us) 00:17:37.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.867 Nvme0n1 : 1.00 10796.00 42.17 0.00 0.00 0.00 0.00 0.00 00:17:37.867 =================================================================================================================== 00:17:37.867 Total : 10796.00 42.17 0.00 0.00 0.00 0.00 0.00 00:17:37.867 00:17:38.802 07:44:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:38.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.802 Nvme0n1 : 2.00 10922.50 42.67 0.00 0.00 0.00 0.00 0.00 00:17:38.802 =================================================================================================================== 00:17:38.802 Total : 10922.50 42.67 0.00 0.00 0.00 0.00 0.00 00:17:38.802 00:17:39.060 true 00:17:39.060 07:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:39.060 07:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:39.320 07:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:39.320 07:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:39.320 07:44:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1056739 00:17:39.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.890 Nvme0n1 : 3.00 10922.33 42.67 0.00 0.00 0.00 0.00 0.00 00:17:39.890 =================================================================================================================== 00:17:39.890 Total : 10922.33 42.67 0.00 0.00 0.00 0.00 0.00 00:17:39.890 00:17:40.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.830 Nvme0n1 : 4.00 10922.25 42.67 0.00 0.00 0.00 0.00 0.00 00:17:40.830 =================================================================================================================== 00:17:40.830 Total : 10922.25 42.67 0.00 0.00 0.00 0.00 0.00 00:17:40.830 00:17:41.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.768 Nvme0n1 : 5.00 10973.00 42.86 0.00 0.00 0.00 0.00 0.00 00:17:41.768 =================================================================================================================== 00:17:41.768 
Total : 10973.00 42.86 0.00 0.00 0.00 0.00 0.00 00:17:41.768 00:17:42.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.705 Nvme0n1 : 6.00 11017.50 43.04 0.00 0.00 0.00 0.00 0.00 00:17:42.705 =================================================================================================================== 00:17:42.705 Total : 11017.50 43.04 0.00 0.00 0.00 0.00 0.00 00:17:42.705 00:17:44.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.086 Nvme0n1 : 7.00 11012.86 43.02 0.00 0.00 0.00 0.00 0.00 00:17:44.086 =================================================================================================================== 00:17:44.086 Total : 11012.86 43.02 0.00 0.00 0.00 0.00 0.00 00:17:44.086 00:17:45.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.021 Nvme0n1 : 8.00 11033.25 43.10 0.00 0.00 0.00 0.00 0.00 00:17:45.021 =================================================================================================================== 00:17:45.022 Total : 11033.25 43.10 0.00 0.00 0.00 0.00 0.00 00:17:45.022 00:17:45.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.958 Nvme0n1 : 9.00 11049.11 43.16 0.00 0.00 0.00 0.00 0.00 00:17:45.958 =================================================================================================================== 00:17:45.958 Total : 11049.11 43.16 0.00 0.00 0.00 0.00 0.00 00:17:45.958 00:17:46.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.895 Nvme0n1 : 10.00 11050.80 43.17 0.00 0.00 0.00 0.00 0.00 00:17:46.895 =================================================================================================================== 00:17:46.895 Total : 11050.80 43.17 0.00 0.00 0.00 0.00 0.00 00:17:46.895 00:17:46.895 00:17:46.895 Latency(us) 00:17:46.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.895 Nvme0n1 : 10.00 11056.81 43.19 0.00 0.00 11569.48 8252.68 22913.33 00:17:46.895 =================================================================================================================== 00:17:46.895 Total : 11056.81 43.19 0.00 0.00 11569.48 8252.68 22913.33 00:17:46.895 0 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1056599 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1056599 ']' 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1056599 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1056599 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1056599' 00:17:46.895 killing process with pid 1056599 00:17:46.895 07:44:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1056599 00:17:46.895 Received shutdown signal, test time was about 10.000000 seconds 00:17:46.895 00:17:46.895 Latency(us) 00:17:46.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.895 =================================================================================================================== 00:17:46.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.895 07:44:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1056599 00:17:47.832 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:48.398 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:48.398 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:48.398 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:48.657 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:48.657 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:48.657 07:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:48.917 [2024-07-15 07:44:40.109037] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:48.917 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:48.917 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:48.917 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:48.918 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:49.177 request: 00:17:49.177 { 00:17:49.177 "uuid": "97a1feed-49a6-40f5-a46e-a905644ccb75", 00:17:49.177 "method": "bdev_lvol_get_lvstores", 00:17:49.177 "req_id": 1 00:17:49.177 } 00:17:49.177 Got JSON-RPC error response 00:17:49.177 response: 00:17:49.177 { 00:17:49.177 "code": -19, 00:17:49.177 "message": "No such device" 00:17:49.177 } 00:17:49.177 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:49.177 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:49.177 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:49.177 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:49.177 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:49.745 aio_bdev 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc995ee0-99f6-40fe-a635-6fb6ef4dd175 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=dc995ee0-99f6-40fe-a635-6fb6ef4dd175 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:49.745 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:50.002 07:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc995ee0-99f6-40fe-a635-6fb6ef4dd175 -t 2000 00:17:50.261 [ 00:17:50.261 { 00:17:50.261 "name": "dc995ee0-99f6-40fe-a635-6fb6ef4dd175", 00:17:50.261 "aliases": [ 00:17:50.261 "lvs/lvol" 00:17:50.261 ], 00:17:50.261 "product_name": "Logical Volume", 00:17:50.261 "block_size": 4096, 00:17:50.261 "num_blocks": 38912, 00:17:50.261 "uuid": "dc995ee0-99f6-40fe-a635-6fb6ef4dd175", 00:17:50.261 "assigned_rate_limits": { 00:17:50.261 "rw_ios_per_sec": 0, 00:17:50.261 "rw_mbytes_per_sec": 0, 00:17:50.261 "r_mbytes_per_sec": 0, 00:17:50.261 "w_mbytes_per_sec": 0 00:17:50.261 }, 00:17:50.261 "claimed": false, 00:17:50.261 "zoned": false, 00:17:50.261 "supported_io_types": { 00:17:50.261 "read": true, 00:17:50.261 "write": true, 00:17:50.261 "unmap": true, 00:17:50.261 "flush": false, 00:17:50.261 "reset": true, 00:17:50.261 "nvme_admin": false, 00:17:50.261 "nvme_io": false, 00:17:50.261 
"nvme_io_md": false, 00:17:50.261 "write_zeroes": true, 00:17:50.261 "zcopy": false, 00:17:50.261 "get_zone_info": false, 00:17:50.261 "zone_management": false, 00:17:50.261 "zone_append": false, 00:17:50.261 "compare": false, 00:17:50.261 "compare_and_write": false, 00:17:50.261 "abort": false, 00:17:50.261 "seek_hole": true, 00:17:50.261 "seek_data": true, 00:17:50.261 "copy": false, 00:17:50.261 "nvme_iov_md": false 00:17:50.261 }, 00:17:50.261 "driver_specific": { 00:17:50.261 "lvol": { 00:17:50.261 "lvol_store_uuid": "97a1feed-49a6-40f5-a46e-a905644ccb75", 00:17:50.261 "base_bdev": "aio_bdev", 00:17:50.261 "thin_provision": false, 00:17:50.261 "num_allocated_clusters": 38, 00:17:50.261 "snapshot": false, 00:17:50.261 "clone": false, 00:17:50.261 "esnap_clone": false 00:17:50.261 } 00:17:50.261 } 00:17:50.261 } 00:17:50.261 ] 00:17:50.261 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:50.261 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:50.261 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:50.521 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:50.521 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:50.521 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:50.781 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:50.781 07:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc995ee0-99f6-40fe-a635-6fb6ef4dd175 00:17:51.042 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97a1feed-49a6-40f5-a46e-a905644ccb75 00:17:51.300 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.587 00:17:51.587 real 0m19.345s 00:17:51.587 user 0m18.948s 00:17:51.587 sys 0m2.030s 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.587 ************************************ 00:17:51.587 END TEST lvs_grow_clean 00:17:51.587 ************************************ 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:51.587 ************************************ 00:17:51.587 START TEST lvs_grow_dirty 00:17:51.587 ************************************ 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.587 07:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.845 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:51.845 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:52.103 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b06416ca-b264-41c9-a355-55b15043df76 00:17:52.103 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:17:52.103 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:52.362 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:52.362 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:52.362 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b06416ca-b264-41c9-a355-55b15043df76 lvol 150 00:17:52.620 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=969e5cb1-ba44-42b5-a278-01b8967dfcca 00:17:52.620 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:52.620 07:44:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:52.878 
[2024-07-15 07:44:44.048537] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:52.878 [2024-07-15 07:44:44.048684] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:52.878 true 00:17:52.878 07:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:17:52.878 07:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:53.135 07:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:53.135 07:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:53.393 07:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 969e5cb1-ba44-42b5-a278-01b8967dfcca 00:17:53.651 07:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:53.909 [2024-07-15 07:44:45.087906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.909 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1058902 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1058902 /var/tmp/bdevperf.sock 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1058902 ']' 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
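Up to this point, the dirty-grow setup reduces to the RPC flow below (a condensed sketch of the trace above, not the script verbatim; rpc.py stands for scripts/rpc.py, and <lvs_uuid>/<lvol_uuid> are placeholders for the UUIDs printed in this run, b06416ca-... and 969e5cb1-...):

  # back an lvstore with a resizable AIO bdev (4 KiB logical blocks)
  truncate -s 200M "$testdir/aio_bdev"
  rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs    # prints <lvs_uuid>
  rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150       # 150 MiB volume

  # grow the backing file; the AIO bdev only sees the new size after a rescan,
  # and the lvstore itself stays at 49 data clusters until grown explicitly
  truncate -s 400M "$testdir/aio_bdev"
  rpc.py bdev_aio_rescan aio_bdev

  # export the lvol over NVMe/TCP for bdevperf
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While bdevperf then drives random writes against Nvme0n1, the script calls bdev_lvol_grow_lvstore -u <lvs_uuid>, which is why total_data_clusters jumps from 49 to 99 below without interrupting I/O.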
00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.168 07:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:54.428 [2024-07-15 07:44:45.429718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:54.428 [2024-07-15 07:44:45.429868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058902 ] 00:17:54.428 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.428 [2024-07-15 07:44:45.559356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.689 [2024-07-15 07:44:45.795108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.256 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.256 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:55.256 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:55.513 Nvme0n1 00:17:55.513 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:55.771 [ 00:17:55.771 { 00:17:55.771 "name": "Nvme0n1", 00:17:55.771 "aliases": [ 00:17:55.771 "969e5cb1-ba44-42b5-a278-01b8967dfcca" 00:17:55.771 ], 00:17:55.771 "product_name": "NVMe disk", 00:17:55.771 "block_size": 4096, 00:17:55.771 "num_blocks": 38912, 00:17:55.771 "uuid": "969e5cb1-ba44-42b5-a278-01b8967dfcca", 00:17:55.771 "assigned_rate_limits": { 00:17:55.771 "rw_ios_per_sec": 0, 00:17:55.771 "rw_mbytes_per_sec": 0, 00:17:55.771 "r_mbytes_per_sec": 0, 00:17:55.771 "w_mbytes_per_sec": 0 00:17:55.771 }, 00:17:55.771 "claimed": false, 00:17:55.771 "zoned": false, 00:17:55.771 "supported_io_types": { 00:17:55.771 "read": true, 00:17:55.771 "write": true, 00:17:55.771 "unmap": true, 00:17:55.771 "flush": true, 00:17:55.771 "reset": true, 00:17:55.771 "nvme_admin": true, 00:17:55.771 "nvme_io": true, 00:17:55.771 "nvme_io_md": false, 00:17:55.771 "write_zeroes": true, 00:17:55.771 "zcopy": false, 00:17:55.771 "get_zone_info": false, 00:17:55.771 "zone_management": false, 00:17:55.771 "zone_append": false, 00:17:55.771 "compare": true, 00:17:55.771 "compare_and_write": true, 00:17:55.771 "abort": true, 00:17:55.771 "seek_hole": false, 00:17:55.771 "seek_data": false, 00:17:55.771 "copy": true, 00:17:55.771 "nvme_iov_md": false 00:17:55.771 }, 00:17:55.771 "memory_domains": [ 00:17:55.771 { 00:17:55.771 "dma_device_id": "system", 00:17:55.771 "dma_device_type": 1 00:17:55.771 } 00:17:55.771 ], 00:17:55.771 "driver_specific": { 00:17:55.771 "nvme": [ 00:17:55.771 { 00:17:55.771 "trid": { 00:17:55.771 "trtype": "TCP", 00:17:55.771 "adrfam": "IPv4", 00:17:55.771 "traddr": "10.0.0.2", 00:17:55.771 "trsvcid": "4420", 00:17:55.771 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:55.771 }, 00:17:55.771 "ctrlr_data": { 00:17:55.771 "cntlid": 1, 00:17:55.771 "vendor_id": "0x8086", 00:17:55.771 "model_number": "SPDK bdev Controller", 00:17:55.771 "serial_number": "SPDK0", 
00:17:55.771 "firmware_revision": "24.09", 00:17:55.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:55.771 "oacs": { 00:17:55.771 "security": 0, 00:17:55.771 "format": 0, 00:17:55.771 "firmware": 0, 00:17:55.771 "ns_manage": 0 00:17:55.771 }, 00:17:55.771 "multi_ctrlr": true, 00:17:55.771 "ana_reporting": false 00:17:55.771 }, 00:17:55.771 "vs": { 00:17:55.771 "nvme_version": "1.3" 00:17:55.771 }, 00:17:55.771 "ns_data": { 00:17:55.771 "id": 1, 00:17:55.771 "can_share": true 00:17:55.771 } 00:17:55.771 } 00:17:55.771 ], 00:17:55.771 "mp_policy": "active_passive" 00:17:55.771 } 00:17:55.771 } 00:17:55.771 ] 00:17:55.771 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1059045 00:17:55.771 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:55.771 07:44:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.030 Running I/O for 10 seconds... 00:17:56.970 Latency(us) 00:17:56.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.970 Nvme0n1 : 1.00 11115.00 43.42 0.00 0.00 0.00 0.00 0.00 00:17:56.970 =================================================================================================================== 00:17:56.970 Total : 11115.00 43.42 0.00 0.00 0.00 0.00 0.00 00:17:56.970 00:17:57.907 07:44:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b06416ca-b264-41c9-a355-55b15043df76 00:17:57.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.907 Nvme0n1 : 2.00 11018.50 43.04 0.00 0.00 0.00 0.00 0.00 00:17:57.907 =================================================================================================================== 00:17:57.907 Total : 11018.50 43.04 0.00 0.00 0.00 0.00 0.00 00:17:57.907 00:17:58.164 true 00:17:58.164 07:44:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:17:58.164 07:44:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:58.423 07:44:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:58.423 07:44:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:58.423 07:44:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1059045 00:17:59.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.016 Nvme0n1 : 3.00 11113.33 43.41 0.00 0.00 0.00 0.00 0.00 00:17:59.017 =================================================================================================================== 00:17:59.017 Total : 11113.33 43.41 0.00 0.00 0.00 0.00 0.00 00:17:59.017 00:17:59.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.954 Nvme0n1 : 4.00 11160.75 43.60 0.00 0.00 0.00 0.00 0.00 00:17:59.954 =================================================================================================================== 00:17:59.954 Total : 11160.75 43.60 0.00 
0.00 0.00 0.00 0.00 00:17:59.954 00:18:00.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.890 Nvme0n1 : 5.00 11189.20 43.71 0.00 0.00 0.00 0.00 0.00 00:18:00.890 =================================================================================================================== 00:18:00.891 Total : 11189.20 43.71 0.00 0.00 0.00 0.00 0.00 00:18:00.891 00:18:02.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.270 Nvme0n1 : 6.00 11229.33 43.86 0.00 0.00 0.00 0.00 0.00 00:18:02.270 =================================================================================================================== 00:18:02.270 Total : 11229.33 43.86 0.00 0.00 0.00 0.00 0.00 00:18:02.270 00:18:03.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.206 Nvme0n1 : 7.00 11230.86 43.87 0.00 0.00 0.00 0.00 0.00 00:18:03.206 =================================================================================================================== 00:18:03.206 Total : 11230.86 43.87 0.00 0.00 0.00 0.00 0.00 00:18:03.206 00:18:04.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.144 Nvme0n1 : 8.00 11271.75 44.03 0.00 0.00 0.00 0.00 0.00 00:18:04.144 =================================================================================================================== 00:18:04.144 Total : 11271.75 44.03 0.00 0.00 0.00 0.00 0.00 00:18:04.144 00:18:05.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.081 Nvme0n1 : 9.00 11268.33 44.02 0.00 0.00 0.00 0.00 0.00 00:18:05.081 =================================================================================================================== 00:18:05.081 Total : 11268.33 44.02 0.00 0.00 0.00 0.00 0.00 00:18:05.081 00:18:06.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.026 Nvme0n1 : 10.00 11297.20 44.13 0.00 0.00 0.00 0.00 0.00 00:18:06.026 =================================================================================================================== 00:18:06.026 Total : 11297.20 44.13 0.00 0.00 0.00 0.00 0.00 00:18:06.026 00:18:06.026 00:18:06.026 Latency(us) 00:18:06.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.026 Nvme0n1 : 10.00 11304.36 44.16 0.00 0.00 11316.07 6747.78 24369.68 00:18:06.026 =================================================================================================================== 00:18:06.026 Total : 11304.36 44.16 0.00 0.00 11316.07 6747.78 24369.68 00:18:06.026 0 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1058902 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1058902 ']' 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1058902 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1058902 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:06.026 07:44:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1058902' 00:18:06.026 killing process with pid 1058902 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1058902 00:18:06.026 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.026 00:18:06.026 Latency(us) 00:18:06.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.026 =================================================================================================================== 00:18:06.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.026 07:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1058902 00:18:06.989 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:07.245 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:07.812 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:07.812 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:07.812 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:07.812 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:07.812 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1056031 00:18:07.812 07:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1056031 00:18:07.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1056031 Killed "${NVMF_APP[@]}" "$@" 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1060495 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1060495 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1060495 ']' 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.812 07:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:08.071 [2024-07-15 07:44:59.120243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:08.071 [2024-07-15 07:44:59.120413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.071 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.071 [2024-07-15 07:44:59.260536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.331 [2024-07-15 07:44:59.503205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.331 [2024-07-15 07:44:59.503290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.331 [2024-07-15 07:44:59.503315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.331 [2024-07-15 07:44:59.503336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.331 [2024-07-15 07:44:59.503353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
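The dirty half of the test starts here: the old target (pid 1056031) was killed with SIGKILL while the lvstore was still open, so its metadata was never cleanly unloaded. When the fresh target re-creates the AIO bdev a few lines below, blobstore has to replay and recover that metadata before the lvol reappears. In sketch form (the waitforbdev helper in the trace boils down to bdev_wait_for_examine plus a bdev_get_bdevs call with a 2000 ms timeout):

  # point the freshly started target at the same, now dirty, backing file
  rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  # -> "Performing recovery on blobstore" / "Recover: blob 0x1" in the log
  rpc.py bdev_wait_for_examine                   # let lvol examine/claim finish
  rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000   # recovered lvol, same UUID as before

The free_clusters == 61 check that follows confirms the grow performed before the kill survived recovery.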
00:18:08.331 [2024-07-15 07:44:59.503421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.897 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:09.156 [2024-07-15 07:45:00.377507] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:09.156 [2024-07-15 07:45:00.377737] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:09.156 [2024-07-15 07:45:00.377826] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 969e5cb1-ba44-42b5-a278-01b8967dfcca 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=969e5cb1-ba44-42b5-a278-01b8967dfcca 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:09.415 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:09.674 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 969e5cb1-ba44-42b5-a278-01b8967dfcca -t 2000 00:18:09.674 [ 00:18:09.674 { 00:18:09.674 "name": "969e5cb1-ba44-42b5-a278-01b8967dfcca", 00:18:09.674 "aliases": [ 00:18:09.674 "lvs/lvol" 00:18:09.674 ], 00:18:09.674 "product_name": "Logical Volume", 00:18:09.674 "block_size": 4096, 00:18:09.674 "num_blocks": 38912, 00:18:09.674 "uuid": "969e5cb1-ba44-42b5-a278-01b8967dfcca", 00:18:09.674 "assigned_rate_limits": { 00:18:09.674 "rw_ios_per_sec": 0, 00:18:09.674 "rw_mbytes_per_sec": 0, 00:18:09.674 "r_mbytes_per_sec": 0, 00:18:09.674 "w_mbytes_per_sec": 0 00:18:09.674 }, 00:18:09.674 "claimed": false, 00:18:09.674 "zoned": false, 00:18:09.674 "supported_io_types": { 00:18:09.674 "read": true, 00:18:09.674 "write": true, 00:18:09.674 "unmap": true, 00:18:09.674 "flush": false, 00:18:09.674 "reset": true, 00:18:09.674 "nvme_admin": false, 00:18:09.674 "nvme_io": false, 00:18:09.674 "nvme_io_md": 
false, 00:18:09.674 "write_zeroes": true, 00:18:09.674 "zcopy": false, 00:18:09.674 "get_zone_info": false, 00:18:09.674 "zone_management": false, 00:18:09.674 "zone_append": false, 00:18:09.674 "compare": false, 00:18:09.674 "compare_and_write": false, 00:18:09.674 "abort": false, 00:18:09.674 "seek_hole": true, 00:18:09.674 "seek_data": true, 00:18:09.674 "copy": false, 00:18:09.674 "nvme_iov_md": false 00:18:09.674 }, 00:18:09.674 "driver_specific": { 00:18:09.674 "lvol": { 00:18:09.674 "lvol_store_uuid": "b06416ca-b264-41c9-a355-55b15043df76", 00:18:09.674 "base_bdev": "aio_bdev", 00:18:09.674 "thin_provision": false, 00:18:09.674 "num_allocated_clusters": 38, 00:18:09.674 "snapshot": false, 00:18:09.674 "clone": false, 00:18:09.674 "esnap_clone": false 00:18:09.674 } 00:18:09.674 } 00:18:09.674 } 00:18:09.674 ] 00:18:09.674 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:09.674 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:09.674 07:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:09.932 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:09.932 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:09.932 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:10.498 [2024-07-15 07:45:01.666229] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:10.498 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:10.757 request: 00:18:10.757 { 00:18:10.757 "uuid": "b06416ca-b264-41c9-a355-55b15043df76", 00:18:10.757 "method": "bdev_lvol_get_lvstores", 00:18:10.757 "req_id": 1 00:18:10.757 } 00:18:10.757 Got JSON-RPC error response 00:18:10.757 response: 00:18:10.757 { 00:18:10.757 "code": -19, 00:18:10.757 "message": "No such device" 00:18:10.757 } 00:18:10.757 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:10.757 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.757 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.757 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.757 07:45:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:11.016 aio_bdev 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 969e5cb1-ba44-42b5-a278-01b8967dfcca 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=969e5cb1-ba44-42b5-a278-01b8967dfcca 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:11.016 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:11.275 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 969e5cb1-ba44-42b5-a278-01b8967dfcca -t 2000 00:18:11.533 [ 00:18:11.533 { 00:18:11.533 "name": "969e5cb1-ba44-42b5-a278-01b8967dfcca", 00:18:11.533 "aliases": [ 00:18:11.533 "lvs/lvol" 00:18:11.533 ], 00:18:11.533 "product_name": "Logical Volume", 00:18:11.533 "block_size": 4096, 00:18:11.533 "num_blocks": 38912, 00:18:11.533 "uuid": "969e5cb1-ba44-42b5-a278-01b8967dfcca", 00:18:11.533 "assigned_rate_limits": { 00:18:11.533 "rw_ios_per_sec": 0, 00:18:11.533 "rw_mbytes_per_sec": 0, 00:18:11.533 "r_mbytes_per_sec": 0, 00:18:11.533 "w_mbytes_per_sec": 0 00:18:11.533 }, 00:18:11.533 "claimed": false, 00:18:11.533 "zoned": false, 00:18:11.533 "supported_io_types": { 
00:18:11.533 "read": true, 00:18:11.533 "write": true, 00:18:11.533 "unmap": true, 00:18:11.533 "flush": false, 00:18:11.533 "reset": true, 00:18:11.533 "nvme_admin": false, 00:18:11.533 "nvme_io": false, 00:18:11.533 "nvme_io_md": false, 00:18:11.533 "write_zeroes": true, 00:18:11.533 "zcopy": false, 00:18:11.533 "get_zone_info": false, 00:18:11.533 "zone_management": false, 00:18:11.533 "zone_append": false, 00:18:11.533 "compare": false, 00:18:11.533 "compare_and_write": false, 00:18:11.533 "abort": false, 00:18:11.533 "seek_hole": true, 00:18:11.533 "seek_data": true, 00:18:11.533 "copy": false, 00:18:11.533 "nvme_iov_md": false 00:18:11.533 }, 00:18:11.533 "driver_specific": { 00:18:11.533 "lvol": { 00:18:11.533 "lvol_store_uuid": "b06416ca-b264-41c9-a355-55b15043df76", 00:18:11.533 "base_bdev": "aio_bdev", 00:18:11.533 "thin_provision": false, 00:18:11.533 "num_allocated_clusters": 38, 00:18:11.533 "snapshot": false, 00:18:11.533 "clone": false, 00:18:11.533 "esnap_clone": false 00:18:11.533 } 00:18:11.533 } 00:18:11.533 } 00:18:11.533 ] 00:18:11.533 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:11.533 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:11.533 07:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:11.790 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:11.790 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b06416ca-b264-41c9-a355-55b15043df76 00:18:11.790 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:12.048 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:12.048 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 969e5cb1-ba44-42b5-a278-01b8967dfcca 00:18:12.613 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b06416ca-b264-41c9-a355-55b15043df76 00:18:12.613 07:45:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:12.872 07:45:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:12.872 00:18:12.872 real 0m21.341s 00:18:12.872 user 0m54.288s 00:18:12.872 sys 0m4.632s 00:18:12.872 07:45:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:12.872 07:45:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:12.872 ************************************ 00:18:12.872 END TEST lvs_grow_dirty 00:18:12.872 ************************************ 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:13.129 nvmf_trace.0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.129 rmmod nvme_tcp 00:18:13.129 rmmod nvme_fabrics 00:18:13.129 rmmod nvme_keyring 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1060495 ']' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1060495 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1060495 ']' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1060495 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060495 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060495' 00:18:13.129 killing process with pid 1060495 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1060495 00:18:13.129 07:45:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1060495 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.507 
07:45:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.507 07:45:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.413 07:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.413 00:18:16.413 real 0m47.715s 00:18:16.413 user 1m20.869s 00:18:16.413 sys 0m8.672s 00:18:16.413 07:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:16.413 07:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:16.413 ************************************ 00:18:16.413 END TEST nvmf_lvs_grow 00:18:16.413 ************************************ 00:18:16.413 07:45:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:16.413 07:45:07 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:16.413 07:45:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:16.413 07:45:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.413 07:45:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.413 ************************************ 00:18:16.413 START TEST nvmf_bdev_io_wait 00:18:16.413 ************************************ 00:18:16.413 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:16.671 * Looking for test storage... 
00:18:16.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.671 07:45:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:18.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:18.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:18.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:18.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.570 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:18:18.571 00:18:18.571 --- 10.0.0.2 ping statistics --- 00:18:18.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.571 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:18:18.571 00:18:18.571 --- 10.0.0.1 ping statistics --- 00:18:18.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.571 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1063773 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1063773 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1063773 ']' 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.571 07:45:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.829 [2024-07-15 07:45:09.869581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
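For reference, the nvmftestinit plumbing above moves the target-side e810 port into its own network namespace so the initiator and target can talk TCP over real NICs on one host. Condensed from the trace (cvl_0_0 and cvl_0_1 are the port names detected a few lines up):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # sanity check, root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse

Every nvmf_tgt in this job is then launched through ip netns exec cvl_0_0_ns_spdk, as in the nvmfappstart invocation here.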
00:18:18.829 [2024-07-15 07:45:09.869726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.829 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.829 [2024-07-15 07:45:10.016235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.088 [2024-07-15 07:45:10.278329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.088 [2024-07-15 07:45:10.278416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.088 [2024-07-15 07:45:10.278444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.088 [2024-07-15 07:45:10.278466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.088 [2024-07-15 07:45:10.278488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.088 [2024-07-15 07:45:10.278621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.088 [2024-07-15 07:45:10.278695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.088 [2024-07-15 07:45:10.278787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.088 [2024-07-15 07:45:10.278796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.655 07:45:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:19.913 [2024-07-15 07:45:11.111334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
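A minimal sketch of the target bring-up traced above, replayed by hand with rpc.py (paths assume an SPDK checkout; option values are copied from the trace):

    # nvmf_tgt sits idle under --wait-for-rpc until framework_start_init arrives.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool, the starvation this io_wait test relies on
    ./scripts/rpc.py framework_start_init         # leave --wait-for-rpc limbo
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Subsystem wiring, mirroring the rpc_cmd calls traced just below:
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420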
00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.913 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 Malloc0 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 [2024-07-15 07:45:11.231239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1064051 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1064053 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.172 { 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme$subsystem", 00:18:20.172 "trtype": "$TEST_TRANSPORT", 00:18:20.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "$NVMF_PORT", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.172 "hdgst": ${hdgst:-false}, 00:18:20.172 "ddgst": ${ddgst:-false} 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 } 00:18:20.172 EOF 00:18:20.172 )") 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1064055 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.172 { 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme$subsystem", 00:18:20.172 "trtype": "$TEST_TRANSPORT", 00:18:20.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "$NVMF_PORT", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.172 "hdgst": ${hdgst:-false}, 00:18:20.172 "ddgst": ${ddgst:-false} 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 } 00:18:20.172 EOF 00:18:20.172 )") 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1064058 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.172 { 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme$subsystem", 00:18:20.172 "trtype": "$TEST_TRANSPORT", 00:18:20.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "$NVMF_PORT", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.172 "hdgst": ${hdgst:-false}, 00:18:20.172 "ddgst": ${ddgst:-false} 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 } 00:18:20.172 EOF 00:18:20.172 )") 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.172 07:45:11 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.172 { 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme$subsystem", 00:18:20.172 "trtype": "$TEST_TRANSPORT", 00:18:20.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "$NVMF_PORT", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.172 "hdgst": ${hdgst:-false}, 00:18:20.172 "ddgst": ${ddgst:-false} 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 } 00:18:20.172 EOF 00:18:20.172 )") 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1064051 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme1", 00:18:20.172 "trtype": "tcp", 00:18:20.172 "traddr": "10.0.0.2", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "4420", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.172 "hdgst": false, 00:18:20.172 "ddgst": false 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 }' 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme1", 00:18:20.172 "trtype": "tcp", 00:18:20.172 "traddr": "10.0.0.2", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "4420", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.172 "hdgst": false, 00:18:20.172 "ddgst": false 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 }' 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme1", 00:18:20.172 "trtype": "tcp", 00:18:20.172 "traddr": "10.0.0.2", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "4420", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.172 "hdgst": false, 00:18:20.172 "ddgst": false 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 }' 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:20.172 07:45:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.172 "params": { 00:18:20.172 "name": "Nvme1", 00:18:20.172 "trtype": "tcp", 00:18:20.172 "traddr": "10.0.0.2", 00:18:20.172 "adrfam": "ipv4", 00:18:20.172 "trsvcid": "4420", 00:18:20.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.172 "hdgst": false, 00:18:20.172 "ddgst": false 00:18:20.172 }, 00:18:20.172 "method": "bdev_nvme_attach_controller" 00:18:20.172 }' 00:18:20.172 [2024-07-15 07:45:11.315020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.172 [2024-07-15 07:45:11.315022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.172 [2024-07-15 07:45:11.315192] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:20.172 [2024-07-15 07:45:11.315210] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:20.172 [2024-07-15 07:45:11.317673] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.172 [2024-07-15 07:45:11.317674] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
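The four printf blocks above are the per-instance fragments that gen_nvmf_target_json assembles; jq wraps them in a bdev-subsystem config document that each bdevperf reads via --json /dev/fd/63. A sketch of the resolved document for one instance (file name illustrative; the wrapper shape is assumed from the helper):

    cat > /tmp/nvme1.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # Equivalent to one of the traced workers, with a file standing in for /dev/fd/63:
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256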
00:18:20.172 [2024-07-15 07:45:11.317816] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:20.172 [2024-07-15 07:45:11.317819] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:20.432 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.432 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.432 [2024-07-15 07:45:11.556712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.432 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.720 [2024-07-15 07:45:11.665940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.720 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.720 [2024-07-15 07:45:11.742349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.720 [2024-07-15 07:45:11.784268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:20.720 [2024-07-15 07:45:11.820944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.720 [2024-07-15 07:45:11.894249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.979 [2024-07-15 07:45:11.957186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:20.979 [2024-07-15 07:45:12.036213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:21.239 Running I/O for 1 seconds... 00:18:21.239 Running I/O for 1 seconds... 00:18:21.239 Running I/O for 1 seconds... 00:18:21.239 Running I/O for 1 seconds...
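All four bdevperf instances are now live, one reactor core each. The fan-out amounts to this sketch, with gen_nvmf_target_json standing in for the helper traced above:

    BPERF=./build/examples/bdevperf
    i=1
    for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
      set -- $spec                       # $1 = core mask, $2 = workload
      $BPERF -m "$1" -i "$i" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w "$2" -t 1 -s 256 &
      i=$((i + 1))
    done
    wait                                 # each worker prints its one-second latency table, as below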
00:18:22.174 00:18:22.174 Latency(us) 00:18:22.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.174 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:22.174 Nvme1n1 : 1.01 8057.19 31.47 0.00 0.00 15797.98 6747.78 20971.52 00:18:22.174 =================================================================================================================== 00:18:22.174 Total : 8057.19 31.47 0.00 0.00 15797.98 6747.78 20971.52 00:18:22.174 00:18:22.174 Latency(us) 00:18:22.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.174 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:22.174 Nvme1n1 : 1.01 5583.16 21.81 0.00 0.00 22769.92 4878.79 33010.73 00:18:22.174 =================================================================================================================== 00:18:22.174 Total : 5583.16 21.81 0.00 0.00 22769.92 4878.79 33010.73 00:18:22.174 00:18:22.174 Latency(us) 00:18:22.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.174 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:22.174 Nvme1n1 : 1.01 7211.53 28.17 0.00 0.00 17658.80 8738.13 33010.73 00:18:22.174 =================================================================================================================== 00:18:22.174 Total : 7211.53 28.17 0.00 0.00 17658.80 8738.13 33010.73 00:18:22.433 00:18:22.433 Latency(us) 00:18:22.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.433 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:22.433 Nvme1n1 : 1.00 156392.82 610.91 0.00 0.00 815.47 333.75 1049.79 00:18:22.433 =================================================================================================================== 00:18:22.433 Total : 156392.82 610.91 0.00 0.00 815.47 333.75 1049.79 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1064053 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1064055 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1064058 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.368 rmmod nvme_tcp 00:18:23.368 rmmod nvme_fabrics 00:18:23.368 rmmod nvme_keyring 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1063773 ']' 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1063773 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1063773 ']' 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1063773 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1063773 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1063773' 00:18:23.368 killing process with pid 1063773 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1063773 00:18:23.368 07:45:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1063773 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.742 07:45:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.649 07:45:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.649 00:18:26.649 real 0m10.179s 00:18:26.649 user 0m30.422s 00:18:26.649 sys 0m4.251s 00:18:26.649 07:45:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.649 07:45:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.649 ************************************ 00:18:26.649 END TEST nvmf_bdev_io_wait 00:18:26.649 ************************************ 00:18:26.649 07:45:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:26.649 07:45:17 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:26.649 07:45:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:26.649 07:45:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.649 07:45:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.649 ************************************ 00:18:26.649 START TEST nvmf_queue_depth 00:18:26.649 ************************************ 
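Before the queue_depth body starts, note that the nvmf_bdev_io_wait epilogue above (killprocess, module unload, namespace teardown) follows a fixed pattern; a condensed sketch, with the function body reconstructed from the traced behaviour:

    killprocess() {
      local pid=$1
      kill "$pid"                        # SIGTERM; reactors drain and the target exits
      while kill -0 "$pid" 2> /dev/null; do
        sleep 0.1
      done
    }
    killprocess "$nvmfpid"
    set +e                               # tolerate modules that are already gone
    modprobe -v -r nvme-tcp              # pulls nvme-fabrics and nvme-keyring out with it
    modprobe -v -r nvme-fabrics
    set -e
    ip netns delete cvl_0_0_ns_spdk      # the _remove_spdk_ns step
    ip -4 addr flush cvl_0_1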
00:18:26.649 07:45:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:26.906 * Looking for test storage... 00:18:26.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.906 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.907 07:45:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.810 
07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.810 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.810 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.810 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:18:28.811 00:18:28.811 --- 10.0.0.2 ping statistics --- 00:18:28.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.811 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:18:28.811 00:18:28.811 --- 10.0.0.1 ping statistics --- 00:18:28.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.811 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.811 07:45:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1066419 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1066419 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1066419 ']' 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.811 07:45:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:29.069 [2024-07-15 07:45:20.104931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
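waitforlisten above gates everything on the target's RPC socket; the polling idiom, reconstructed as a sketch from the traced rpc_addr and max_retries variables:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1              # target died during bring-up
        ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.1
      done
      return 1                                               # retries exhausted
    }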
00:18:29.069 [2024-07-15 07:45:20.105068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.069 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.069 [2024-07-15 07:45:20.241243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.329 [2024-07-15 07:45:20.493555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.329 [2024-07-15 07:45:20.493620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.329 [2024-07-15 07:45:20.493643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.329 [2024-07-15 07:45:20.493663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.329 [2024-07-15 07:45:20.493696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.329 [2024-07-15 07:45:20.493738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:29.895 [2024-07-15 07:45:21.049161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:29.895 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.896 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 Malloc0 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.154 
07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 [2024-07-15 07:45:21.170044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1066582 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1066582 /var/tmp/bdevperf.sock 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1066582 ']' 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.154 07:45:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:30.154 [2024-07-15 07:45:21.256731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
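bdevperf runs here in RPC mode: -z holds it idle on /var/tmp/bdevperf.sock, the remote namespace is attached over that socket, and perform_tests releases the latch. End to end, from the commands traced above (paths relative to an SPDK checkout):

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # Once the socket answers, attach the target's namespace as local bdev NVMe0n1:
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # perform_tests starts the queued verify workload; the 10-second run follows:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests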
00:18:30.154 [2024-07-15 07:45:21.256941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066582 ] 00:18:30.154 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.414 [2024-07-15 07:45:21.395412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.673 [2024-07-15 07:45:21.647545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.240 07:45:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.240 07:45:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:31.241 07:45:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:31.241 07:45:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.241 07:45:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:31.241 NVMe0n1 00:18:31.241 07:45:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.241 07:45:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.499 Running I/O for 10 seconds... 00:18:41.503 00:18:41.503 Latency(us) 00:18:41.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.503 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:41.503 Verification LBA range: start 0x0 length 0x4000 00:18:41.503 NVMe0n1 : 10.10 6084.67 23.77 0.00 0.00 167383.95 12136.30 103304.15 00:18:41.503 =================================================================================================================== 00:18:41.503 Total : 6084.67 23.77 0.00 0.00 167383.95 12136.30 103304.15 00:18:41.503 0 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1066582 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1066582 ']' 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1066582 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1066582 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1066582' 00:18:41.503 killing process with pid 1066582 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1066582 00:18:41.503 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.503 00:18:41.503 Latency(us) 00:18:41.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.503 
=================================================================================================================== 00:18:41.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.503 07:45:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1066582 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.919 rmmod nvme_tcp 00:18:42.919 rmmod nvme_fabrics 00:18:42.919 rmmod nvme_keyring 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1066419 ']' 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1066419 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1066419 ']' 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1066419 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1066419 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1066419' 00:18:42.919 killing process with pid 1066419 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1066419 00:18:42.919 07:45:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1066419 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.295 07:45:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.831 07:45:37 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.831 00:18:46.831 real 0m19.573s 00:18:46.831 user 0m28.106s 00:18:46.831 sys 0m3.132s 00:18:46.831 07:45:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.831 07:45:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.831 ************************************ 00:18:46.831 END TEST nvmf_queue_depth 00:18:46.831 ************************************ 00:18:46.831 07:45:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:46.831 07:45:37 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:46.831 07:45:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:46.831 07:45:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.831 07:45:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.831 ************************************ 00:18:46.831 START TEST nvmf_target_multipath 00:18:46.831 ************************************ 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:46.831 * Looking for test storage... 00:18:46.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.831 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.832 07:45:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:48.735 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:18:48.736 00:18:48.736 --- 10.0.0.2 ping statistics --- 00:18:48.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.736 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:18:48.736 00:18:48.736 --- 10.0.0.1 ping statistics --- 00:18:48.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.736 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:48.736 only one NIC for nvmf test 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.736 rmmod nvme_tcp 00:18:48.736 rmmod nvme_fabrics 00:18:48.736 rmmod nvme_keyring 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.736 07:45:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.645 00:18:50.645 real 0m4.236s 00:18:50.645 user 0m0.795s 00:18:50.645 sys 0m1.438s 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.645 07:45:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.645 ************************************ 00:18:50.645 END TEST nvmf_target_multipath 00:18:50.645 ************************************ 00:18:50.645 07:45:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:50.645 07:45:41 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:50.645 07:45:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:50.645 07:45:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.645 07:45:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.645 ************************************ 00:18:50.645 START TEST nvmf_zcopy 00:18:50.645 ************************************ 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:50.645 * Looking for test storage... 
00:18:50.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.645 07:45:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:52.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.548 
07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:52.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.548 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:52.549 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:52.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.549 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:18:52.806 00:18:52.806 --- 10.0.0.2 ping statistics --- 00:18:52.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.806 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:18:52.806 00:18:52.806 --- 10.0.0.1 ping statistics --- 00:18:52.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.806 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:52.806 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1072006 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1072006 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1072006 ']' 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.807 07:45:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.807 [2024-07-15 07:45:44.003749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:52.807 [2024-07-15 07:45:44.003905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.066 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.066 [2024-07-15 07:45:44.148664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.326 [2024-07-15 07:45:44.405685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.326 [2024-07-15 07:45:44.405760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
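At this point the zcopy harness has verified its loopback topology (the two pings earlier in the trace) and nvmf_tgt (pid 1072006) is coming up inside the namespace via 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2'. The topology, condensed from the nvmf/common.sh@229-268 trace above (every command below appears verbatim in the log; only the grouping comments are added):

# Target port cvl_0_0 moves into a private namespace with 10.0.0.2/24;
# initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic (port 4420) on the initiator interface, then
# check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1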
00:18:53.326 [2024-07-15 07:45:44.405786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.326 [2024-07-15 07:45:44.405808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.326 [2024-07-15 07:45:44.405827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.326 [2024-07-15 07:45:44.405871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 [2024-07-15 07:45:44.984244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.893 07:45:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 [2024-07-15 07:45:45.000452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 malloc0 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.893 
07:45:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.893 { 00:18:53.893 "params": { 00:18:53.893 "name": "Nvme$subsystem", 00:18:53.893 "trtype": "$TEST_TRANSPORT", 00:18:53.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.893 "adrfam": "ipv4", 00:18:53.893 "trsvcid": "$NVMF_PORT", 00:18:53.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.893 "hdgst": ${hdgst:-false}, 00:18:53.893 "ddgst": ${ddgst:-false} 00:18:53.893 }, 00:18:53.893 "method": "bdev_nvme_attach_controller" 00:18:53.893 } 00:18:53.893 EOF 00:18:53.893 )") 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:53.893 07:45:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:53.893 "params": { 00:18:53.893 "name": "Nvme1", 00:18:53.893 "trtype": "tcp", 00:18:53.893 "traddr": "10.0.0.2", 00:18:53.893 "adrfam": "ipv4", 00:18:53.893 "trsvcid": "4420", 00:18:53.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.893 "hdgst": false, 00:18:53.893 "ddgst": false 00:18:53.893 }, 00:18:53.893 "method": "bdev_nvme_attach_controller" 00:18:53.893 }' 00:18:54.153 [2024-07-15 07:45:45.161631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:54.153 [2024-07-15 07:45:45.161757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072163 ] 00:18:54.153 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.153 [2024-07-15 07:45:45.299721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.412 [2024-07-15 07:45:45.553448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.982 Running I/O for 10 seconds... 
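Before the 'Running I/O' line above, zcopy.sh@22-30 assembled the target through rpc_cmd. The same sequence as standalone rpc.py calls, condensed from the traced commands; note that rpc_cmd in the harness wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket, which the standalone form below assumes:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with zero-copy enabled (-o, -c 0, --zcopy)
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with a 4 KiB block size, exported as namespace 1
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1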
00:19:07.203 00:19:07.203 Latency(us) 00:19:07.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.203 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:07.203 Verification LBA range: start 0x0 length 0x1000 00:19:07.203 Nvme1n1 : 10.06 4356.93 34.04 0.00 0.00 29191.32 4878.79 45632.47 00:19:07.203 =================================================================================================================== 00:19:07.203 Total : 4356.93 34.04 0.00 0.00 29191.32 4878.79 45632.47 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1073604 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:07.203 { 00:19:07.203 "params": { 00:19:07.203 "name": "Nvme$subsystem", 00:19:07.203 "trtype": "$TEST_TRANSPORT", 00:19:07.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.203 "adrfam": "ipv4", 00:19:07.203 "trsvcid": "$NVMF_PORT", 00:19:07.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.203 "hdgst": ${hdgst:-false}, 00:19:07.203 "ddgst": ${ddgst:-false} 00:19:07.203 }, 00:19:07.203 "method": "bdev_nvme_attach_controller" 00:19:07.203 } 00:19:07.203 EOF 00:19:07.203 )") 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
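The --json /dev/fd/62 and /dev/fd/63 arguments in the two bdevperf invocations above indicate the attach configuration is fed in through process substitution: gen_nvmf_target_json (nvmf/common.sh@532-558, traced above) prints a bdev_nvme_attach_controller config for Nvme1 on stdout, and bdevperf reads it from the inherited fd. A sketch of the call shape, assuming process substitution is indeed what produces the /dev/fd path:

# Hypothetical standalone form of the traced zcopy.sh@37 invocation; the
# resolved gen_nvmf_target_json output appears in the printf further below.
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192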
00:19:07.203 [2024-07-15 07:45:57.199889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.199968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:07.203 07:45:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:07.203 "params": { 00:19:07.203 "name": "Nvme1", 00:19:07.203 "trtype": "tcp", 00:19:07.203 "traddr": "10.0.0.2", 00:19:07.203 "adrfam": "ipv4", 00:19:07.203 "trsvcid": "4420", 00:19:07.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.203 "hdgst": false, 00:19:07.203 "ddgst": false 00:19:07.203 }, 00:19:07.203 "method": "bdev_nvme_attach_controller" 00:19:07.203 }' 00:19:07.203 [2024-07-15 07:45:57.207770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.207807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.215786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.215818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.223797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.223830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.231812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.231846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.239844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.239898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.247846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.247900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.255851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.255905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.263942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.263970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.271927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.203 [2024-07-15 07:45:57.271956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.203 [2024-07-15 07:45:57.274060] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:07.203 [2024-07-15 07:45:57.274193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073604 ]
00:19:07.203 [2024-07-15 07:45:57.279982 .. 07:45:57.344176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
00:19:07.203 EAL: No free 2048 kB hugepages reported on node 1
00:19:07.203 [2024-07-15 07:45:57.352171 .. 07:45:57.376294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
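In the EAL line above, -c 0x1 is the core mask (one core, matching the single reactor started later), --huge-unlink tells EAL to unlink its hugepage backing files right after mapping them, and --file-prefix=spdk_pid1073604 namespaces those files by the bdevperf PID, the same perfpid captured at zcopy.sh@39, so two concurrent SPDK processes cannot collide on hugepage state. An assumed quick check, not taken from this log, and resting on DPDK's usual <prefix>map_N file naming:

# Hugepage backing files carry the --file-prefix; with --huge-unlink they may
# already have been removed by the time you look.
ls /dev/hugepages/spdk_pid1073604map_* 2>/dev/null \
    || echo "backing files already unlinked (--huge-unlink)"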
00:19:07.203 [2024-07-15 07:45:57.384263 .. 07:45:57.408392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
00:19:07.203 [2024-07-15 07:45:57.411508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:07.203 [2024-07-15 07:45:57.416365 .. 07:45:57.600960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
00:19:07.204 [2024-07-15 07:45:57.608954 .. 07:45:57.665109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
00:19:07.204 [2024-07-15 07:45:57.667871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:07.204 [2024-07-15 07:45:57.673101 .. 07:45:57.713282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
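The error pair that dominates this capture is one failed namespace add: spdk_nvmf_subsystem_add_ns_ext() rejects a request for an explicit NSID that is already allocated on the subsystem, and the RPC path (nvmf_rpc_ns_paused, run after the subsystem is paused for the update) then logs "Unable to add namespace". The test keeps issuing the add while the target still holds NSID 1, hence the steady stream. A minimal way to provoke the same pair by hand, with hypothetical subsystem and bdev names:

# Hypothetical reproduction against a target that exposes Malloc0 (names are examples):
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # first add owns NSID 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # rejected: NSID 1 already in use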
00:19:07.204 [2024-07-15 07:45:57.721270 .. 07:45:58.194699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
00:19:07.205 [2024-07-15 07:45:58.202758 .. 07:45:58.234820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~8 ms)
00:19:07.205 Running I/O for 5 seconds...
00:19:07.205 [2024-07-15 07:45:58.251529 .. 07:45:58.351998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~14 ms)
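Note the cadence change around "Running I/O for 5 seconds...": the failing add-namespace attempts arrive every ~8 ms during bdevperf startup and slow to every ~14-15 ms once the 5-second randrw workload is running alongside them. The loop driving them presumably has roughly this shape; a sketch under that assumption, not the actual zcopy.sh source:

# Keep retrying the add for as long as the perf job (PID $perfpid) is alive.
while kill -0 "$perfpid" 2>/dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
done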
00:19:07.205 [2024-07-15 07:45:58.365991 .. 07:46:00.519180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated every ~15 ms; the capture ends mid-pair at 07:46:00.519180)
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.336 [2024-07-15 07:46:00.531854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.336 [2024-07-15 07:46:00.531906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.336 [2024-07-15 07:46:00.547338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.336 [2024-07-15 07:46:00.547377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.336 [2024-07-15 07:46:00.562833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.336 [2024-07-15 07:46:00.562874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.577788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.577828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.592226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.592266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.607694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.607744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.623692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.623731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.639614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.639653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.655078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.655113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.667382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.667422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.681159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.681194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.696056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.696091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.711403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.711442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.727019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.727055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.742258] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.742297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.757709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.757749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.773389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.773428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.788179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.788218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.803777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.803817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.595 [2024-07-15 07:46:00.817281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.595 [2024-07-15 07:46:00.817320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.832611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.832652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.847675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.847715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.860069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.860104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.874504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.874543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.889527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.889566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.904571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.904610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.919946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.919981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.935627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.935667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.950494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.950533] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.965758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.965797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.978716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.978756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:00.994242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:00.994283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:01.010870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:01.010942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:01.027314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:01.027354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:01.043002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:01.043037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:01.058763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:01.058802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.856 [2024-07-15 07:46:01.074376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.856 [2024-07-15 07:46:01.074417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.116 [2024-07-15 07:46:01.089714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.116 [2024-07-15 07:46:01.089754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.116 [2024-07-15 07:46:01.105703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.116 [2024-07-15 07:46:01.105743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.116 [2024-07-15 07:46:01.121098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.116 [2024-07-15 07:46:01.121135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.137222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.137262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.152151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.152204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.168319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.168359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.183706] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.183745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.199020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.199056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.214655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.214695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.229243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.229282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.244044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.244081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.259481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.259520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.272676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.272716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.288013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.288049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.303205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.303259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.318270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.318311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.117 [2024-07-15 07:46:01.334273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.117 [2024-07-15 07:46:01.334312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.349546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.349586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.364981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.365017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.377828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.377867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.392015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.392055] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.406805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.406845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.422027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.422063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.437458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.437497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.452552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.452591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.467872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.467921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.483584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.483623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.498621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.498661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.513801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.513839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.528847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.528896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.544255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.544296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.560096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.560132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.575737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.575776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.376 [2024-07-15 07:46:01.591051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.376 [2024-07-15 07:46:01.591087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.606279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.606317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.620242] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.620279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.634052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.634087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.647632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.647668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.661358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.661393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.675059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.675094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.688717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.688753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.702884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.702919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.716656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.716693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.730343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.730379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.744004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.744040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.757999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.758035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.772222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.772259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.786509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.786546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.799896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.799940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.814209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.814245] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.828649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.828686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.843534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.843571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.636 [2024-07-15 07:46:01.857317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.636 [2024-07-15 07:46:01.857353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.871565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.871602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.886460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.886497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.900937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.900974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.915522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.915559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.929898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.929934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.944126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.944162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.957448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.957485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.971769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.971805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:01.985936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:01.985972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:02.000111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:02.000168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.895 [2024-07-15 07:46:02.014505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.895 [2024-07-15 07:46:02.014542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.029147] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.029185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.043570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.043606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.057757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.057793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.072369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.072405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.086572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.086609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.101122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.101158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.896 [2024-07-15 07:46:02.114846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.896 [2024-07-15 07:46:02.114889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.129043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.129080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.143077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.143112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.156958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.156994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.170667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.170703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.184735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.184771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.199232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.199268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.213333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.213370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.227835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.227871] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.242098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.242134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.255773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.255819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.270025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.270070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.285903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.285968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.302059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.302094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.317625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.317665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.330792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.330831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.345011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.345048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.359792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.359833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.154 [2024-07-15 07:46:02.374419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.154 [2024-07-15 07:46:02.374459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.389993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.390030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.405276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.405313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.419894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.419945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.435059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.435094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.450511] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.450550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.466430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.466470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.482495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.482534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.498097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.498132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.513515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.513554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.528495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.528534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.543855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.543904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.559233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.559283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.573869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.573920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.589444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.589483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.604605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.604645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.619760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.619799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.414 [2024-07-15 07:46:02.635088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.414 [2024-07-15 07:46:02.635125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.650543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.650583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.663789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.663827] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.678944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.678980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.693778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.693817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.709189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.709228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.723744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.723783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.738825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.738863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.754073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.754109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.769096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.769132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.784318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.784358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.798851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.798914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.814250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.814290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.828983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.829018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.844287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.844338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.859539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.859579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.874810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.874847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.672 [2024-07-15 07:46:02.888571] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.672 [2024-07-15 07:46:02.888610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.903818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.903859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.918609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.918648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.933897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.933948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.949401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.949439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.964508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.964548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.979536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.979576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:02.994990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:02.995026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.010665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.010707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.025960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.026005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.040642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.040681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.055942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.055977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.071039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.071091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.086596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.086636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.931 [2024-07-15 07:46:03.101078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.931 [2024-07-15 07:46:03.101115] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:11.931 [2024-07-15 07:46:03.117270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:11.931 [2024-07-15 07:46:03.117309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:11.931 [2024-07-15 07:46:03.132612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:11.931 [2024-07-15 07:46:03.132650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:11.931 [2024-07-15 07:46:03.147985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:11.931 [2024-07-15 07:46:03.148020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.162730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.162771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.178612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.178650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.193859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.193922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.209047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.209082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.224203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.224253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.239451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.239490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.254572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.254611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188 [2024-07-15 07:46:03.262089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.188 [2024-07-15 07:46:03.262122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.188
00:19:12.188 Latency(us)
00:19:12.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.188 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:12.188 Nvme1n1 : 5.01 8482.46 66.27 0.00 0.00 15062.94 5388.52 25826.04
00:19:12.189 ===================================================================================================================
00:19:12.189 Total : 8482.46 66.27 0.00 0.00 15062.94 5388.52 25826.04
00:19:12.189 [2024-07-15 07:46:03.269474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.189 [2024-07-15 07:46:03.269510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.189 [2024-07-15 07:46:03.277504]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.189 [2024-07-15 07:46:03.277541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... this subsystem.c:2054 / nvmf_rpc.c:1546 error pair repeats roughly a hundred more times, at ~8 ms intervals from 07:46:03.285 through 07:46:04.200, while zcopy.sh keeps retrying nvmf_subsystem_add_ns against the in-use NSID 1; the duplicate entries are elided ...] 00:19:13.224 [2024-07-15 07:46:04.208134]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.208162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.216149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.216193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.224277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.224318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.232211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.232258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.240247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.240280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.248275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.248311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.256318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.256352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.264314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.264348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 [2024-07-15 07:46:04.272349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:13.224 [2024-07-15 07:46:04.272381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:13.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1073604) - No such process 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1073604 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:13.224 delay0 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:13.224 
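The block above is the core of the zcopy namespace exercise: the script hammers nvmf_subsystem_add_ns with an NSID that is already in use (every attempt correctly fails), then frees the NSID and re-adds it backed by a delay bdev so the upcoming abort run has slow I/O to cancel. A minimal sketch of that RPC sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 with malloc0 as NSID 1; the rpc.py path mirrors this workspace layout and the RPC/NQN shell variables are ours, not the test's:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_ns $NQN malloc0 -n 1 \
        || echo 'NSID 1 busy - expected while the namespace exists'
    $RPC nvmf_subsystem_remove_ns $NQN 1            # free NSID 1
    $RPC bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000 # ~1 s injected latency (values in microseconds)
    $RPC nvmf_subsystem_add_ns $NQN delay0 -n 1     # NSID 1 now backed by the slow delay0 bdev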
07:46:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.224 07:46:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:13.224 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.224 [2024-07-15 07:46:04.442614] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:19.788 Initializing NVMe Controllers 00:19:19.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:19.788 Initialization complete. Launching workers. 00:19:19.788 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 72 00:19:19.788 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 359, failed to submit 33 00:19:19.788 success 166, unsuccess 193, failed 0 00:19:19.788 07:46:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:19.788 07:46:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:19.788 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.788 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:19.788 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.789 rmmod nvme_tcp 00:19:19.789 rmmod nvme_fabrics 00:19:19.789 rmmod nvme_keyring 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1072006 ']' 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1072006 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1072006 ']' 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1072006 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072006 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072006' 00:19:19.789 killing process with pid 1072006 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1072006 00:19:19.789 07:46:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1072006 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.166 07:46:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.091 07:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.091 00:19:23.091 real 0m32.485s 00:19:23.091 user 0m49.067s 00:19:23.091 sys 0m8.119s 00:19:23.091 07:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.091 07:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:23.091 ************************************ 00:19:23.091 END TEST nvmf_zcopy 00:19:23.091 ************************************ 00:19:23.091 07:46:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:23.091 07:46:14 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:23.091 07:46:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:23.091 07:46:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.091 07:46:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:23.091 ************************************ 00:19:23.091 START TEST nvmf_nmic 00:19:23.091 ************************************ 00:19:23.091 07:46:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:23.349 * Looking for test storage... 
00:19:23.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.349 07:46:14 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:23.349 07:46:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.254 
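Before the nmic test proper can run, nvmftestinit has to find usable NICs: the gather_supported_nvmf_pci_devs helper whose setup starts below builds PCI ID tables for Intel e810/x722 and Mellanox parts, then walks each matching PCI function's sysfs node to locate its kernel net devices. A rough standalone equivalent of that walk, assuming the two e810 functions this log goes on to discover (0000:0a:00.0 and 0000:0a:00.1):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            # each entry under .../net is a netdev bound to that PCI function
            [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
        done
    done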
07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.254 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:25.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.255 07:46:16 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:25.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:25.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:25.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.255 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:25.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:19:25.546 00:19:25.546 --- 10.0.0.2 ping statistics --- 00:19:25.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.546 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:19:25.546 00:19:25.546 --- 10.0.0.1 ping statistics --- 00:19:25.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.546 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1077239 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1077239 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1077239 ']' 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.546 07:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.546 [2024-07-15 07:46:16.606159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:25.546 [2024-07-15 07:46:16.606304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.546 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.546 [2024-07-15 07:46:16.748041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.803 [2024-07-15 07:46:17.009059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.803 [2024-07-15 07:46:17.009134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:25.803 [2024-07-15 07:46:17.009162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.803 [2024-07-15 07:46:17.009183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.803 [2024-07-15 07:46:17.009204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.803 [2024-07-15 07:46:17.009344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.803 [2024-07-15 07:46:17.009392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.803 [2024-07-15 07:46:17.009432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.803 [2024-07-15 07:46:17.009445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.371 [2024-07-15 07:46:17.526251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.371 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 Malloc0 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 [2024-07-15 07:46:17.632957] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:26.632 test case1: single bdev can't be used in multiple subsystems 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:26.632 [2024-07-15 07:46:17.656705] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:26.632 [2024-07-15 07:46:17.656748] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:26.632 [2024-07-15 07:46:17.656772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.632 request: 00:19:26.632 { 00:19:26.632 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:26.632 "namespace": { 00:19:26.632 "bdev_name": "Malloc0", 00:19:26.632 "no_auto_visible": false 00:19:26.632 }, 00:19:26.632 "method": "nvmf_subsystem_add_ns", 00:19:26.632 "req_id": 1 00:19:26.632 } 00:19:26.632 Got JSON-RPC error response 00:19:26.632 response: 00:19:26.632 { 00:19:26.632 "code": -32602, 00:19:26.632 "message": "Invalid parameters" 00:19:26.632 } 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:26.632 Adding namespace failed - expected result. 
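Test case1 above is a negative test: the first nvmf_subsystem_add_ns claimed Malloc0 with an exclusive_write claim, so exporting the same bdev from a second subsystem must fail, and the JSON-RPC -32602 response is the pass condition. Condensed to its essentials, using the same RPCs the rpc_cmd calls issue (RPC as in the earlier sketch):

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected: second subsystem claimed Malloc0'
    else
        echo ' Adding namespace failed - expected result.'
    fi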
00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
test case2: host connect to nvmf target in multiple paths
00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:26.632 [2024-07-15 07:46:17.664842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.632 07:46:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:19:27.202 07:46:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:19:27.768 07:46:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:19:27.768 07:46:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:19:27.768 07:46:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:19:27.768 07:46:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:19:27.768 07:46:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:19:30.306 07:46:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:19:30.306 [global]
00:19:30.306 thread=1
00:19:30.306 invalidate=1
00:19:30.306 rw=write
00:19:30.306 time_based=1
00:19:30.306 runtime=1
00:19:30.306 ioengine=libaio
00:19:30.306 direct=1
00:19:30.306 bs=4096
00:19:30.306 iodepth=1
00:19:30.306 norandommap=0
00:19:30.306 numjobs=1
00:19:30.306
00:19:30.306 verify_dump=1
00:19:30.306 verify_backlog=512
00:19:30.306 verify_state_save=0
00:19:30.306 do_verify=1
00:19:30.306 verify=crc32c-intel
00:19:30.306 [job0]
00:19:30.306 filename=/dev/nvme0n1
00:19:30.306 Could not set queue depth (nvme0n1)
00:19:30.306 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:30.306 fio-3.35
00:19:30.306 Starting 1 thread
00:19:31.240
00:19:31.240 job0: (groupid=0, jobs=1): err= 0: pid=1077882: Mon Jul 15 07:46:22 2024
00:19:31.240 read: IOPS=79, BW=319KiB/s (327kB/s)(332KiB/1040msec)
00:19:31.240 slat (nsec): min=7427, max=33208, avg=14749.35, stdev=6257.19
00:19:31.240 clat (usec): min=348, max=41397, avg=10674.10, stdev=17746.38
00:19:31.240 lat (usec): min=356, max=41416, avg=10688.85, stdev=17748.94
00:19:31.240 clat percentiles (usec):
00:19:31.240 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367],
00:19:31.240 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 449],
00:19:31.240 | 70.00th=[ 506], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:19:31.240 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:19:31.240 | 99.99th=[41157]
00:19:31.240 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets
00:19:31.240 slat (nsec): min=6351, max=75560, avg=18951.25, stdev=12200.46
00:19:31.240 clat (usec): min=198, max=454, avg=274.22, stdev=58.92
00:19:31.240 lat (usec): min=206, max=490, avg=293.17, stdev=66.68
00:19:31.240 clat percentiles (usec):
00:19:31.240 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225],
00:19:31.240 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 273],
00:19:31.240 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 392],
00:19:31.240 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 453], 99.95th=[ 453],
00:19:31.240 | 99.99th=[ 453]
00:19:31.240 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:19:31.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:19:31.240 lat (usec) : 250=41.51%, 500=54.29%, 750=0.67%
00:19:31.240 lat (msec) : 50=3.53%
00:19:31.240 cpu : usr=0.67%, sys=0.77%, ctx=595, majf=0, minf=2
00:19:31.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:31.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:31.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:31.240 issued rwts: total=83,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:31.240 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:31.240
00:19:31.240 Run status group 0 (all jobs):
00:19:31.240 READ: bw=319KiB/s (327kB/s), 319KiB/s-319KiB/s (327kB/s-327kB/s), io=332KiB (340kB), run=1040-1040msec
00:19:31.240 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec
00:19:31.240
00:19:31.240 Disk stats (read/write):
00:19:31.240 nvme0n1: ios=129/512, merge=0/0, ticks=971/137, in_queue=1108, util=96.39%
00:19:31.240 07:46:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:31.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.499 rmmod nvme_tcp 00:19:31.499 rmmod nvme_fabrics 00:19:31.499 rmmod nvme_keyring 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1077239 ']' 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1077239 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1077239 ']' 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1077239 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.499 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077239 00:19:31.758 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:31.758 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:31.758 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077239' 00:19:31.758 killing process with pid 1077239 00:19:31.758 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1077239 00:19:31.758 07:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1077239 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.137 07:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.039 07:46:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.039 00:19:35.039 real 0m11.944s 00:19:35.039 user 0m28.078s 00:19:35.039 sys 0m2.517s 00:19:35.039 07:46:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.039 07:46:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:35.039 ************************************ 00:19:35.039 END TEST nvmf_nmic 00:19:35.039 ************************************ 00:19:35.297 07:46:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.297 07:46:26 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:35.297 07:46:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 
3 -le 1 ']' 00:19:35.297 07:46:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.297 07:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.297 ************************************ 00:19:35.297 START TEST nvmf_fio_target 00:19:35.297 ************************************ 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:35.297 * Looking for test storage... 00:19:35.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.297 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.298 07:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.198 07:46:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:37.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:37.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.198 07:46:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:37.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:37.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up
00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:37.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:37.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms
00:19:37.198
00:19:37.198 --- 10.0.0.2 ping statistics ---
00:19:37.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:37.198 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:37.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:37.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms
00:19:37.198
00:19:37.198 --- 10.0.0.1 ping statistics ---
00:19:37.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:37.198 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:37.198 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:37.199 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1080094
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1080094
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1080094 ']'
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
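[Annotation: editorial note, not part of the captured log output.] nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, so the target's TCP listener later binds to 10.0.0.2 on the namespaced cvl_0_0 interface, and waitforlisten then blocks until the application's RPC socket answers. A rough hand-run equivalent, assuming the namespace set up above already exists (the polling loop is illustrative; the real waitforlisten also tracks the PID and a retry limit):

    # start the target in the namespace, backgrounded
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # poll the RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done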
00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.457 07:46:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.457 [2024-07-15 07:46:28.538863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:37.457 [2024-07-15 07:46:28.539053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.457 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.457 [2024-07-15 07:46:28.675252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.714 [2024-07-15 07:46:28.936666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.714 [2024-07-15 07:46:28.936739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.714 [2024-07-15 07:46:28.936767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.714 [2024-07-15 07:46:28.936788] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.715 [2024-07-15 07:46:28.936809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.715 [2024-07-15 07:46:28.936947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.715 [2024-07-15 07:46:28.936994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.715 [2024-07-15 07:46:28.937036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.715 [2024-07-15 07:46:28.937047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.281 07:46:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:38.539 [2024-07-15 07:46:29.699747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.539 07:46:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.107 07:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:39.107 07:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.365 07:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:39.365 07:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.622 07:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:19:39.622 07:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.880 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:39.880 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:40.138 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.396 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:40.396 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.964 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:40.964 07:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.229 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:41.229 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:41.544 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:41.544 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:41.544 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.801 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:41.801 07:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:42.059 07:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.317 [2024-07-15 07:46:33.470368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.317 07:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:42.575 07:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:42.833 07:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:43.769 07:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:43.769 07:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:43.769 07:46:34 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:43.769 07:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:43.769 07:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:43.769 07:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:45.672 07:46:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:45.672 [global] 00:19:45.672 thread=1 00:19:45.672 invalidate=1 00:19:45.672 rw=write 00:19:45.672 time_based=1 00:19:45.672 runtime=1 00:19:45.672 ioengine=libaio 00:19:45.672 direct=1 00:19:45.672 bs=4096 00:19:45.672 iodepth=1 00:19:45.672 norandommap=0 00:19:45.672 numjobs=1 00:19:45.672 00:19:45.672 verify_dump=1 00:19:45.672 verify_backlog=512 00:19:45.672 verify_state_save=0 00:19:45.672 do_verify=1 00:19:45.672 verify=crc32c-intel 00:19:45.672 [job0] 00:19:45.672 filename=/dev/nvme0n1 00:19:45.672 [job1] 00:19:45.672 filename=/dev/nvme0n2 00:19:45.672 [job2] 00:19:45.672 filename=/dev/nvme0n3 00:19:45.672 [job3] 00:19:45.672 filename=/dev/nvme0n4 00:19:45.672 Could not set queue depth (nvme0n1) 00:19:45.672 Could not set queue depth (nvme0n2) 00:19:45.672 Could not set queue depth (nvme0n3) 00:19:45.672 Could not set queue depth (nvme0n4) 00:19:45.930 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.930 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.930 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.930 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.930 fio-3.35 00:19:45.930 Starting 4 threads 00:19:47.306 00:19:47.306 job0: (groupid=0, jobs=1): err= 0: pid=1081289: Mon Jul 15 07:46:38 2024 00:19:47.306 read: IOPS=1101, BW=4405KiB/s (4511kB/s)(4568KiB/1037msec) 00:19:47.306 slat (nsec): min=5260, max=37014, avg=10905.63, stdev=5751.70 00:19:47.306 clat (usec): min=291, max=41702, avg=502.12, stdev=2408.18 00:19:47.306 lat (usec): min=297, max=41715, avg=513.03, stdev=2408.99 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:19:47.306 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:19:47.306 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 482], 00:19:47.306 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41681], 00:19:47.306 | 99.99th=[41681] 00:19:47.306 write: IOPS=1481, BW=5925KiB/s (6067kB/s)(6144KiB/1037msec); 0 zone resets 00:19:47.306 slat (nsec): min=6669, max=63131, avg=15415.75, stdev=9300.81 00:19:47.306 clat 
(usec): min=206, max=486, avg=270.94, stdev=54.26 00:19:47.306 lat (usec): min=214, max=506, avg=286.36, stdev=59.19 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:19:47.306 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:19:47.306 | 70.00th=[ 273], 80.00th=[ 302], 90.00th=[ 363], 95.00th=[ 400], 00:19:47.306 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 474], 99.95th=[ 486], 00:19:47.306 | 99.99th=[ 486] 00:19:47.306 bw ( KiB/s): min= 5256, max= 7032, per=44.44%, avg=6144.00, stdev=1255.82, samples=2 00:19:47.306 iops : min= 1314, max= 1758, avg=1536.00, stdev=313.96, samples=2 00:19:47.306 lat (usec) : 250=26.62%, 500=72.26%, 750=0.97% 00:19:47.306 lat (msec) : 50=0.15% 00:19:47.306 cpu : usr=2.90%, sys=4.54%, ctx=2678, majf=0, minf=1 00:19:47.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.306 issued rwts: total=1142,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.306 job1: (groupid=0, jobs=1): err= 0: pid=1081294: Mon Jul 15 07:46:38 2024 00:19:47.306 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:19:47.306 slat (nsec): min=12969, max=34448, avg=21783.43, stdev=8593.92 00:19:47.306 clat (usec): min=388, max=42008, avg=39502.84, stdev=8977.54 00:19:47.306 lat (usec): min=408, max=42025, avg=39524.62, stdev=8977.93 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:47.306 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:19:47.306 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:47.306 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:47.306 | 99.99th=[42206] 00:19:47.306 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:19:47.306 slat (nsec): min=6718, max=40113, avg=13961.77, stdev=5857.03 00:19:47.306 clat (usec): min=257, max=617, avg=346.23, stdev=51.16 00:19:47.306 lat (usec): min=267, max=648, avg=360.19, stdev=52.61 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 297], 00:19:47.306 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 363], 00:19:47.306 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 412], 00:19:47.306 | 99.00th=[ 469], 99.50th=[ 502], 99.90th=[ 619], 99.95th=[ 619], 00:19:47.306 | 99.99th=[ 619] 00:19:47.306 bw ( KiB/s): min= 4096, max= 4096, per=29.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.306 lat (usec) : 500=95.68%, 750=0.56% 00:19:47.306 lat (msec) : 50=3.75% 00:19:47.306 cpu : usr=0.49%, sys=0.49%, ctx=534, majf=0, minf=1 00:19:47.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.306 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.306 job2: (groupid=0, jobs=1): err= 0: pid=1081295: Mon Jul 15 07:46:38 2024 00:19:47.306 read: IOPS=541, BW=2165KiB/s 
(2217kB/s)(2180KiB/1007msec) 00:19:47.306 slat (nsec): min=6141, max=68267, avg=23420.62, stdev=11405.50 00:19:47.306 clat (usec): min=333, max=41020, avg=1096.92, stdev=5162.42 00:19:47.306 lat (usec): min=339, max=41032, avg=1120.34, stdev=5161.36 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 375], 20.00th=[ 392], 00:19:47.306 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 437], 00:19:47.306 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 510], 00:19:47.306 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:47.306 | 99.99th=[41157] 00:19:47.306 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:19:47.306 slat (nsec): min=6033, max=67642, avg=21441.16, stdev=12533.51 00:19:47.306 clat (usec): min=247, max=502, avg=355.84, stdev=44.16 00:19:47.306 lat (usec): min=254, max=531, avg=377.28, stdev=49.01 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:19:47.306 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 371], 00:19:47.306 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 429], 00:19:47.306 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 498], 99.95th=[ 502], 00:19:47.306 | 99.99th=[ 502] 00:19:47.306 bw ( KiB/s): min= 4096, max= 4096, per=29.63%, avg=4096.00, stdev= 0.00, samples=2 00:19:47.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:47.306 lat (usec) : 250=0.06%, 500=97.26%, 750=2.10% 00:19:47.306 lat (msec) : 50=0.57% 00:19:47.306 cpu : usr=1.59%, sys=3.68%, ctx=1569, majf=0, minf=2 00:19:47.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.306 issued rwts: total=545,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.306 job3: (groupid=0, jobs=1): err= 0: pid=1081296: Mon Jul 15 07:46:38 2024 00:19:47.306 read: IOPS=251, BW=1005KiB/s (1029kB/s)(1032KiB/1027msec) 00:19:47.306 slat (nsec): min=6073, max=36763, avg=9143.24, stdev=7171.87 00:19:47.306 clat (usec): min=291, max=42965, avg=3061.05, stdev=10193.10 00:19:47.306 lat (usec): min=298, max=42994, avg=3070.19, stdev=10197.77 00:19:47.306 clat percentiles (usec): 00:19:47.306 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:19:47.306 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 359], 00:19:47.306 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 537], 95.00th=[41157], 00:19:47.306 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:47.306 | 99.99th=[42730] 00:19:47.306 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:19:47.306 slat (usec): min=7, max=41925, avg=173.19, stdev=2578.13 00:19:47.307 clat (usec): min=208, max=557, avg=279.66, stdev=63.86 00:19:47.307 lat (usec): min=217, max=42264, avg=452.85, stdev=2588.05 00:19:47.307 clat percentiles (usec): 00:19:47.307 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:19:47.307 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 260], 00:19:47.307 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 408], 00:19:47.307 | 99.00th=[ 478], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 562], 00:19:47.307 | 99.99th=[ 562] 00:19:47.307 bw ( KiB/s): min= 4096, max= 4096, 
per=29.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.307 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.307 lat (usec) : 250=35.58%, 500=60.39%, 750=1.82% 00:19:47.307 lat (msec) : 50=2.21% 00:19:47.307 cpu : usr=0.29%, sys=1.27%, ctx=774, majf=0, minf=1 00:19:47.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.307 issued rwts: total=258,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.307 00:19:47.307 Run status group 0 (all jobs): 00:19:47.307 READ: bw=7583KiB/s (7765kB/s), 82.6KiB/s-4405KiB/s (84.6kB/s-4511kB/s), io=7864KiB (8053kB), run=1007-1037msec 00:19:47.307 WRITE: bw=13.5MiB/s (14.2MB/s), 1994KiB/s-5925KiB/s (2042kB/s-6067kB/s), io=14.0MiB (14.7MB), run=1007-1037msec 00:19:47.307 00:19:47.307 Disk stats (read/write): 00:19:47.307 nvme0n1: ios=1074/1266, merge=0/0, ticks=487/315, in_queue=802, util=85.27% 00:19:47.307 nvme0n2: ios=58/512, merge=0/0, ticks=774/166, in_queue=940, util=88.76% 00:19:47.307 nvme0n3: ios=569/707, merge=0/0, ticks=598/239, in_queue=837, util=92.75% 00:19:47.307 nvme0n4: ios=275/512, merge=0/0, ticks=1489/139, in_queue=1628, util=98.49% 00:19:47.307 07:46:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:47.307 [global] 00:19:47.307 thread=1 00:19:47.307 invalidate=1 00:19:47.307 rw=randwrite 00:19:47.307 time_based=1 00:19:47.307 runtime=1 00:19:47.307 ioengine=libaio 00:19:47.307 direct=1 00:19:47.307 bs=4096 00:19:47.307 iodepth=1 00:19:47.307 norandommap=0 00:19:47.307 numjobs=1 00:19:47.307 00:19:47.307 verify_dump=1 00:19:47.307 verify_backlog=512 00:19:47.307 verify_state_save=0 00:19:47.307 do_verify=1 00:19:47.307 verify=crc32c-intel 00:19:47.307 [job0] 00:19:47.307 filename=/dev/nvme0n1 00:19:47.307 [job1] 00:19:47.307 filename=/dev/nvme0n2 00:19:47.307 [job2] 00:19:47.307 filename=/dev/nvme0n3 00:19:47.307 [job3] 00:19:47.307 filename=/dev/nvme0n4 00:19:47.307 Could not set queue depth (nvme0n1) 00:19:47.307 Could not set queue depth (nvme0n2) 00:19:47.307 Could not set queue depth (nvme0n3) 00:19:47.307 Could not set queue depth (nvme0n4) 00:19:47.307 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.307 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.307 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.307 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.307 fio-3.35 00:19:47.307 Starting 4 threads 00:19:48.678 00:19:48.678 job0: (groupid=0, jobs=1): err= 0: pid=1081524: Mon Jul 15 07:46:39 2024 00:19:48.678 read: IOPS=1480, BW=5922KiB/s (6064kB/s)(5928KiB/1001msec) 00:19:48.678 slat (nsec): min=5590, max=49779, avg=9667.61, stdev=4962.39 00:19:48.678 clat (usec): min=278, max=41230, avg=383.30, stdev=1063.31 00:19:48.678 lat (usec): min=284, max=41237, avg=392.96, stdev=1063.30 00:19:48.678 clat percentiles (usec): 00:19:48.678 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:19:48.678 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 
60.00th=[ 355], 00:19:48.678 | 70.00th=[ 367], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 461], 00:19:48.678 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 627], 99.95th=[41157], 00:19:48.678 | 99.99th=[41157] 00:19:48.678 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:48.678 slat (nsec): min=6886, max=57086, avg=13629.01, stdev=7203.07 00:19:48.678 clat (usec): min=195, max=995, avg=251.30, stdev=43.44 00:19:48.678 lat (usec): min=202, max=1005, avg=264.93, stdev=47.67 00:19:48.678 clat percentiles (usec): 00:19:48.678 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:19:48.678 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 251], 00:19:48.678 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:19:48.678 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 693], 99.95th=[ 996], 00:19:48.678 | 99.99th=[ 996] 00:19:48.678 bw ( KiB/s): min= 8112, max= 8112, per=51.05%, avg=8112.00, stdev= 0.00, samples=1 00:19:48.678 iops : min= 2028, max= 2028, avg=2028.00, stdev= 0.00, samples=1 00:19:48.678 lat (usec) : 250=29.72%, 500=68.06%, 750=2.15%, 1000=0.03% 00:19:48.678 lat (msec) : 50=0.03% 00:19:48.678 cpu : usr=3.00%, sys=4.60%, ctx=3019, majf=0, minf=2 00:19:48.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.678 issued rwts: total=1482,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.678 job1: (groupid=0, jobs=1): err= 0: pid=1081525: Mon Jul 15 07:46:39 2024 00:19:48.678 read: IOPS=19, BW=79.6KiB/s (81.5kB/s)(80.0KiB/1005msec) 00:19:48.678 slat (nsec): min=13226, max=33860, avg=18423.85, stdev=7901.64 00:19:48.678 clat (usec): min=40663, max=42018, avg=41612.88, stdev=521.59 00:19:48.678 lat (usec): min=40677, max=42033, avg=41631.30, stdev=519.97 00:19:48.678 clat percentiles (usec): 00:19:48.678 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:48.678 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:48.678 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:48.678 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:48.678 | 99.99th=[42206] 00:19:48.678 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:19:48.678 slat (nsec): min=6785, max=32146, avg=13582.25, stdev=4756.68 00:19:48.678 clat (usec): min=205, max=460, avg=319.17, stdev=47.66 00:19:48.678 lat (usec): min=215, max=475, avg=332.75, stdev=48.76 00:19:48.678 clat percentiles (usec): 00:19:48.678 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 265], 20.00th=[ 285], 00:19:48.678 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:19:48.678 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 392], 95.00th=[ 400], 00:19:48.678 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 461], 00:19:48.678 | 99.99th=[ 461] 00:19:48.678 bw ( KiB/s): min= 4096, max= 4096, per=25.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.678 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.678 lat (usec) : 250=7.33%, 500=88.91% 00:19:48.678 lat (msec) : 50=3.76% 00:19:48.678 cpu : usr=0.30%, sys=0.80%, ctx=534, majf=0, minf=1 00:19:48.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.678 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.679 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.679 job2: (groupid=0, jobs=1): err= 0: pid=1081526: Mon Jul 15 07:46:39 2024 00:19:48.679 read: IOPS=289, BW=1156KiB/s (1184kB/s)(1192KiB/1031msec) 00:19:48.679 slat (nsec): min=6032, max=34615, avg=8517.49, stdev=5024.73 00:19:48.679 clat (usec): min=358, max=41520, avg=2865.47, stdev=9691.54 00:19:48.679 lat (usec): min=365, max=41532, avg=2873.99, stdev=9694.72 00:19:48.679 clat percentiles (usec): 00:19:48.679 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 400], 00:19:48.679 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 412], 60.00th=[ 420], 00:19:48.679 | 70.00th=[ 424], 80.00th=[ 429], 90.00th=[ 445], 95.00th=[40633], 00:19:48.679 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:48.679 | 99.99th=[41681] 00:19:48.679 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:19:48.679 slat (nsec): min=7455, max=40714, avg=12745.80, stdev=5115.65 00:19:48.679 clat (usec): min=227, max=1014, avg=322.99, stdev=72.92 00:19:48.679 lat (usec): min=236, max=1028, avg=335.74, stdev=73.02 00:19:48.679 clat percentiles (usec): 00:19:48.679 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 273], 00:19:48.679 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 326], 00:19:48.679 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 420], 00:19:48.679 | 99.00th=[ 529], 99.50th=[ 791], 99.90th=[ 1012], 99.95th=[ 1012], 00:19:48.679 | 99.99th=[ 1012] 00:19:48.679 bw ( KiB/s): min= 4096, max= 4096, per=25.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.679 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.679 lat (usec) : 250=5.56%, 500=91.11%, 750=0.74%, 1000=0.25% 00:19:48.679 lat (msec) : 2=0.12%, 50=2.22% 00:19:48.679 cpu : usr=0.78%, sys=0.97%, ctx=810, majf=0, minf=1 00:19:48.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.679 issued rwts: total=298,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.679 job3: (groupid=0, jobs=1): err= 0: pid=1081527: Mon Jul 15 07:46:39 2024 00:19:48.679 read: IOPS=1500, BW=6002KiB/s (6146kB/s)(6008KiB/1001msec) 00:19:48.679 slat (nsec): min=5694, max=58565, avg=10141.58, stdev=5407.90 00:19:48.679 clat (usec): min=290, max=1149, avg=356.37, stdev=67.95 00:19:48.679 lat (usec): min=296, max=1155, avg=366.51, stdev=68.49 00:19:48.679 clat percentiles (usec): 00:19:48.679 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:19:48.679 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:19:48.679 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 429], 95.00th=[ 510], 00:19:48.679 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 1156], 00:19:48.679 | 99.99th=[ 1156] 00:19:48.679 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:48.679 slat (nsec): min=7052, max=58126, avg=14353.80, stdev=6613.20 00:19:48.679 clat (usec): min=208, max=991, avg=271.16, stdev=55.57 00:19:48.679 lat (usec): min=216, max=1004, avg=285.51, stdev=56.77 00:19:48.679 clat 
percentiles (usec): 00:19:48.679 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:19:48.679 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:19:48.679 | 70.00th=[ 273], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 379], 00:19:48.679 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[ 758], 99.95th=[ 996], 00:19:48.679 | 99.99th=[ 996] 00:19:48.679 bw ( KiB/s): min= 7952, max= 7952, per=50.04%, avg=7952.00, stdev= 0.00, samples=1 00:19:48.679 iops : min= 1988, max= 1988, avg=1988.00, stdev= 0.00, samples=1 00:19:48.679 lat (usec) : 250=19.59%, 500=77.65%, 750=2.67%, 1000=0.07% 00:19:48.679 lat (msec) : 2=0.03% 00:19:48.679 cpu : usr=2.40%, sys=5.50%, ctx=3038, majf=0, minf=1 00:19:48.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.679 issued rwts: total=1502,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.679 00:19:48.679 Run status group 0 (all jobs): 00:19:48.679 READ: bw=12.5MiB/s (13.1MB/s), 79.6KiB/s-6002KiB/s (81.5kB/s-6146kB/s), io=12.9MiB (13.5MB), run=1001-1031msec 00:19:48.679 WRITE: bw=15.5MiB/s (16.3MB/s), 1986KiB/s-6138KiB/s (2034kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1031msec 00:19:48.679 00:19:48.679 Disk stats (read/write): 00:19:48.679 nvme0n1: ios=1131/1536, merge=0/0, ticks=558/368, in_queue=926, util=95.79% 00:19:48.679 nvme0n2: ios=57/512, merge=0/0, ticks=877/166, in_queue=1043, util=92.69% 00:19:48.679 nvme0n3: ios=350/512, merge=0/0, ticks=755/158, in_queue=913, util=91.66% 00:19:48.679 nvme0n4: ios=1157/1536, merge=0/0, ticks=480/406, in_queue=886, util=95.90% 00:19:48.679 07:46:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:48.679 [global] 00:19:48.679 thread=1 00:19:48.679 invalidate=1 00:19:48.679 rw=write 00:19:48.679 time_based=1 00:19:48.679 runtime=1 00:19:48.679 ioengine=libaio 00:19:48.679 direct=1 00:19:48.679 bs=4096 00:19:48.679 iodepth=128 00:19:48.679 norandommap=0 00:19:48.679 numjobs=1 00:19:48.679 00:19:48.679 verify_dump=1 00:19:48.679 verify_backlog=512 00:19:48.679 verify_state_save=0 00:19:48.679 do_verify=1 00:19:48.679 verify=crc32c-intel 00:19:48.679 [job0] 00:19:48.679 filename=/dev/nvme0n1 00:19:48.679 [job1] 00:19:48.679 filename=/dev/nvme0n2 00:19:48.679 [job2] 00:19:48.679 filename=/dev/nvme0n3 00:19:48.679 [job3] 00:19:48.679 filename=/dev/nvme0n4 00:19:48.679 Could not set queue depth (nvme0n1) 00:19:48.679 Could not set queue depth (nvme0n2) 00:19:48.679 Could not set queue depth (nvme0n3) 00:19:48.679 Could not set queue depth (nvme0n4) 00:19:48.679 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.679 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.679 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.679 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.679 fio-3.35 00:19:48.679 Starting 4 threads 00:19:50.055 00:19:50.055 job0: (groupid=0, jobs=1): err= 0: pid=1081831: Mon Jul 15 07:46:41 2024 00:19:50.055 read: IOPS=1080, BW=4323KiB/s 
(4427kB/s)(4392KiB/1016msec) 00:19:50.055 slat (usec): min=3, max=42375, avg=374.80, stdev=2336.79 00:19:50.055 clat (msec): min=2, max=101, avg=45.78, stdev=20.72 00:19:50.055 lat (msec): min=17, max=101, avg=46.16, stdev=20.88 00:19:50.055 clat percentiles (msec): 00:19:50.055 | 1.00th=[ 21], 5.00th=[ 21], 10.00th=[ 21], 20.00th=[ 23], 00:19:50.055 | 30.00th=[ 32], 40.00th=[ 40], 50.00th=[ 44], 60.00th=[ 50], 00:19:50.055 | 70.00th=[ 57], 80.00th=[ 67], 90.00th=[ 74], 95.00th=[ 86], 00:19:50.055 | 99.00th=[ 92], 99.50th=[ 92], 99.90th=[ 96], 99.95th=[ 102], 00:19:50.055 | 99.99th=[ 102] 00:19:50.055 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:19:50.055 slat (usec): min=4, max=54504, avg=385.10, stdev=2163.15 00:19:50.055 clat (msec): min=17, max=115, avg=48.82, stdev=24.02 00:19:50.055 lat (msec): min=17, max=115, avg=49.21, stdev=24.13 00:19:50.055 clat percentiles (msec): 00:19:50.055 | 1.00th=[ 18], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 28], 00:19:50.055 | 30.00th=[ 29], 40.00th=[ 35], 50.00th=[ 47], 60.00th=[ 54], 00:19:50.055 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 102], 00:19:50.055 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 116], 00:19:50.055 | 99.99th=[ 116] 00:19:50.055 bw ( KiB/s): min= 4496, max= 7360, per=11.12%, avg=5928.00, stdev=2025.15, samples=2 00:19:50.055 iops : min= 1124, max= 1840, avg=1482.00, stdev=506.29, samples=2 00:19:50.055 lat (msec) : 4=0.04%, 20=6.49%, 50=51.10%, 100=39.14%, 250=3.23% 00:19:50.055 cpu : usr=0.79%, sys=2.86%, ctx=182, majf=0, minf=15 00:19:50.055 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:50.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.055 issued rwts: total=1098,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.055 job1: (groupid=0, jobs=1): err= 0: pid=1081853: Mon Jul 15 07:46:41 2024 00:19:50.055 read: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1015msec) 00:19:50.055 slat (usec): min=2, max=8184, avg=101.82, stdev=582.53 00:19:50.055 clat (usec): min=5977, max=33479, avg=12685.37, stdev=2876.75 00:19:50.055 lat (usec): min=5983, max=33483, avg=12787.19, stdev=2896.77 00:19:50.055 clat percentiles (usec): 00:19:50.055 | 1.00th=[ 6063], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[11207], 00:19:50.055 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:50.055 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15664], 95.00th=[16712], 00:19:50.055 | 99.00th=[23725], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:19:50.055 | 99.99th=[33424] 00:19:50.055 write: IOPS=4777, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1015msec); 0 zone resets 00:19:50.055 slat (usec): min=3, max=21902, avg=104.98, stdev=716.74 00:19:50.055 clat (usec): min=4406, max=48129, avg=14509.05, stdev=4353.69 00:19:50.055 lat (usec): min=6700, max=48150, avg=14614.03, stdev=4413.56 00:19:50.055 clat percentiles (usec): 00:19:50.055 | 1.00th=[ 8160], 5.00th=[10683], 10.00th=[11863], 20.00th=[12387], 00:19:50.055 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:19:50.055 | 70.00th=[13829], 80.00th=[14222], 90.00th=[21103], 95.00th=[26084], 00:19:50.055 | 99.00th=[29230], 99.50th=[29230], 99.90th=[30540], 99.95th=[34341], 00:19:50.055 | 99.99th=[47973] 00:19:50.055 bw ( KiB/s): min=18512, max=19256, per=35.43%, avg=18884.00, stdev=526.09, 
samples=2 00:19:50.055 iops : min= 4628, max= 4814, avg=4721.00, stdev=131.52, samples=2 00:19:50.055 lat (msec) : 10=8.55%, 20=85.42%, 50=6.03% 00:19:50.055 cpu : usr=4.24%, sys=6.31%, ctx=566, majf=0, minf=11 00:19:50.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:50.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.055 issued rwts: total=4608,4849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.055 job2: (groupid=0, jobs=1): err= 0: pid=1081876: Mon Jul 15 07:46:41 2024 00:19:50.055 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.2MiB/1017msec) 00:19:50.055 slat (usec): min=3, max=25243, avg=170.80, stdev=988.67 00:19:50.055 clat (usec): min=2227, max=43458, avg=19966.11, stdev=4391.88 00:19:50.055 lat (usec): min=9628, max=43477, avg=20136.91, stdev=4461.23 00:19:50.055 clat percentiles (usec): 00:19:50.055 | 1.00th=[11994], 5.00th=[13698], 10.00th=[15926], 20.00th=[17695], 00:19:50.055 | 30.00th=[17957], 40.00th=[18482], 50.00th=[19006], 60.00th=[19530], 00:19:50.055 | 70.00th=[20579], 80.00th=[22414], 90.00th=[27132], 95.00th=[28181], 00:19:50.055 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:19:50.055 | 99.99th=[43254] 00:19:50.055 write: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec); 0 zone resets 00:19:50.055 slat (usec): min=3, max=37119, avg=144.74, stdev=1198.58 00:19:50.055 clat (usec): min=6479, max=85573, avg=23297.16, stdev=10554.54 00:19:50.055 lat (usec): min=6489, max=85580, avg=23441.90, stdev=10636.71 00:19:50.055 clat percentiles (usec): 00:19:50.055 | 1.00th=[10028], 5.00th=[11863], 10.00th=[13173], 20.00th=[15139], 00:19:50.055 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17957], 60.00th=[25560], 00:19:50.055 | 70.00th=[27919], 80.00th=[30540], 90.00th=[38011], 95.00th=[40633], 00:19:50.055 | 99.00th=[60031], 99.50th=[65799], 99.90th=[82314], 99.95th=[85459], 00:19:50.055 | 99.99th=[85459] 00:19:50.055 bw ( KiB/s): min=12288, max=12288, per=23.05%, avg=12288.00, stdev= 0.00, samples=2 00:19:50.055 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:19:50.055 lat (msec) : 4=0.02%, 10=0.89%, 20=56.54%, 50=41.56%, 100=0.99% 00:19:50.055 cpu : usr=2.17%, sys=3.94%, ctx=244, majf=0, minf=11 00:19:50.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:50.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.055 issued rwts: total=2876,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.055 job3: (groupid=0, jobs=1): err= 0: pid=1081877: Mon Jul 15 07:46:41 2024 00:19:50.055 read: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:19:50.055 slat (usec): min=2, max=7358, avg=119.52, stdev=686.34 00:19:50.055 clat (usec): min=1446, max=25116, avg=15203.59, stdev=2288.17 00:19:50.055 lat (usec): min=5001, max=25124, avg=15323.11, stdev=2333.84 00:19:50.055 clat percentiles (usec): 00:19:50.055 | 1.00th=[ 5473], 5.00th=[11600], 10.00th=[13042], 20.00th=[13960], 00:19:50.055 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15270], 00:19:50.055 | 70.00th=[15664], 80.00th=[16712], 90.00th=[17957], 95.00th=[19006], 00:19:50.055 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22152], 
99.95th=[22414], 00:19:50.055 | 99.99th=[25035] 00:19:50.055 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:50.055 slat (usec): min=3, max=7291, avg=118.67, stdev=627.10 00:19:50.055 clat (usec): min=8419, max=23340, avg=15936.07, stdev=1855.40 00:19:50.055 lat (usec): min=8426, max=23355, avg=16054.75, stdev=1911.72 00:19:50.055 clat percentiles (usec): 00:19:50.055 | 1.00th=[ 9634], 5.00th=[13566], 10.00th=[14091], 20.00th=[14615], 00:19:50.055 | 30.00th=[15270], 40.00th=[15664], 50.00th=[15926], 60.00th=[16319], 00:19:50.055 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[18482], 00:19:50.055 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23200], 99.95th=[23200], 00:19:50.055 | 99.99th=[23462] 00:19:50.055 bw ( KiB/s): min=16384, max=16384, per=30.74%, avg=16384.00, stdev= 0.00, samples=2 00:19:50.055 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:50.055 lat (msec) : 2=0.01%, 10=1.39%, 20=95.21%, 50=3.39% 00:19:50.055 cpu : usr=3.99%, sys=6.18%, ctx=422, majf=0, minf=13 00:19:50.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:50.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.055 issued rwts: total=4054,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.055 00:19:50.055 Run status group 0 (all jobs): 00:19:50.055 READ: bw=48.5MiB/s (50.9MB/s), 4323KiB/s-17.7MiB/s (4427kB/s-18.6MB/s), io=49.4MiB (51.8MB), run=1004-1017msec 00:19:50.055 WRITE: bw=52.1MiB/s (54.6MB/s), 6047KiB/s-18.7MiB/s (6192kB/s-19.6MB/s), io=52.9MiB (55.5MB), run=1004-1017msec 00:19:50.055 00:19:50.055 Disk stats (read/write): 00:19:50.055 nvme0n1: ios=1074/1239, merge=0/0, ticks=15282/19517, in_queue=34799, util=86.57% 00:19:50.055 nvme0n2: ios=3744/4096, merge=0/0, ticks=24836/33827, in_queue=58663, util=86.67% 00:19:50.055 nvme0n3: ios=2350/2560, merge=0/0, ticks=24662/41945, in_queue=66607, util=89.72% 00:19:50.055 nvme0n4: ios=3189/3584, merge=0/0, ticks=24619/26751, in_queue=51370, util=89.52% 00:19:50.055 07:46:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:50.055 [global] 00:19:50.055 thread=1 00:19:50.055 invalidate=1 00:19:50.055 rw=randwrite 00:19:50.055 time_based=1 00:19:50.055 runtime=1 00:19:50.055 ioengine=libaio 00:19:50.055 direct=1 00:19:50.055 bs=4096 00:19:50.055 iodepth=128 00:19:50.055 norandommap=0 00:19:50.055 numjobs=1 00:19:50.055 00:19:50.055 verify_dump=1 00:19:50.055 verify_backlog=512 00:19:50.055 verify_state_save=0 00:19:50.055 do_verify=1 00:19:50.055 verify=crc32c-intel 00:19:50.055 [job0] 00:19:50.055 filename=/dev/nvme0n1 00:19:50.055 [job1] 00:19:50.055 filename=/dev/nvme0n2 00:19:50.055 [job2] 00:19:50.055 filename=/dev/nvme0n3 00:19:50.055 [job3] 00:19:50.055 filename=/dev/nvme0n4 00:19:50.055 Could not set queue depth (nvme0n1) 00:19:50.055 Could not set queue depth (nvme0n2) 00:19:50.055 Could not set queue depth (nvme0n3) 00:19:50.055 Could not set queue depth (nvme0n4) 00:19:50.314 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.314 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.314 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.314 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.314 fio-3.35 00:19:50.314 Starting 4 threads 00:19:51.692 00:19:51.692 job0: (groupid=0, jobs=1): err= 0: pid=1082103: Mon Jul 15 07:46:42 2024 00:19:51.692 read: IOPS=2193, BW=8775KiB/s (8985kB/s)(8836KiB/1007msec) 00:19:51.692 slat (usec): min=2, max=34574, avg=233.25, stdev=1818.03 00:19:51.692 clat (msec): min=4, max=156, avg=26.44, stdev=24.76 00:19:51.692 lat (msec): min=6, max=156, avg=26.68, stdev=24.95 00:19:51.692 clat percentiles (msec): 00:19:51.692 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 15], 00:19:51.692 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 21], 00:19:51.692 | 70.00th=[ 24], 80.00th=[ 28], 90.00th=[ 43], 95.00th=[ 64], 00:19:51.692 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:19:51.692 | 99.99th=[ 157] 00:19:51.692 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:19:51.692 slat (usec): min=3, max=28250, avg=184.01, stdev=1452.49 00:19:51.692 clat (msec): min=7, max=129, avg=26.85, stdev=21.21 00:19:51.692 lat (msec): min=7, max=129, avg=27.04, stdev=21.30 00:19:51.692 clat percentiles (msec): 00:19:51.692 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 14], 20.00th=[ 14], 00:19:51.692 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 22], 00:19:51.692 | 70.00th=[ 28], 80.00th=[ 38], 90.00th=[ 53], 95.00th=[ 78], 00:19:51.692 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 130], 00:19:51.692 | 99.99th=[ 130] 00:19:51.692 bw ( KiB/s): min= 8192, max=12288, per=20.21%, avg=10240.00, stdev=2896.31, samples=2 00:19:51.692 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:19:51.692 lat (msec) : 10=2.66%, 20=53.43%, 50=33.49%, 100=7.82%, 250=2.60% 00:19:51.693 cpu : usr=1.89%, sys=2.88%, ctx=158, majf=0, minf=11 00:19:51.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:51.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.693 issued rwts: total=2209,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.693 job1: (groupid=0, jobs=1): err= 0: pid=1082104: Mon Jul 15 07:46:42 2024 00:19:51.693 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:51.693 slat (usec): min=2, max=47875, avg=179.22, stdev=1602.51 00:19:51.693 clat (msec): min=7, max=125, avg=21.74, stdev=18.51 00:19:51.693 lat (msec): min=7, max=125, avg=21.92, stdev=18.64 00:19:51.693 clat percentiles (msec): 00:19:51.693 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:19:51.693 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 17], 00:19:51.693 | 70.00th=[ 21], 80.00th=[ 31], 90.00th=[ 43], 95.00th=[ 52], 00:19:51.693 | 99.00th=[ 105], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 109], 00:19:51.693 | 99.99th=[ 126] 00:19:51.693 write: IOPS=3124, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1003msec); 0 zone resets 00:19:51.693 slat (usec): min=3, max=35698, avg=137.10, stdev=1235.61 00:19:51.693 clat (usec): min=2689, max=58702, avg=19283.98, stdev=10653.98 00:19:51.693 lat (usec): min=3312, max=63987, avg=19421.08, stdev=10715.15 00:19:51.693 clat percentiles (usec): 00:19:51.693 | 1.00th=[ 6849], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11863], 00:19:51.693 | 
30.00th=[12911], 40.00th=[13698], 50.00th=[15008], 60.00th=[17957], 00:19:51.693 | 70.00th=[20055], 80.00th=[26084], 90.00th=[35914], 95.00th=[44303], 00:19:51.693 | 99.00th=[54789], 99.50th=[54789], 99.90th=[58459], 99.95th=[58459], 00:19:51.693 | 99.99th=[58459] 00:19:51.693 bw ( KiB/s): min=11632, max=12969, per=24.27%, avg=12300.50, stdev=945.40, samples=2 00:19:51.693 iops : min= 2908, max= 3242, avg=3075.00, stdev=236.17, samples=2 00:19:51.693 lat (msec) : 4=0.24%, 10=11.46%, 20=56.88%, 50=26.89%, 100=3.83% 00:19:51.693 lat (msec) : 250=0.69% 00:19:51.693 cpu : usr=2.30%, sys=3.49%, ctx=233, majf=0, minf=7 00:19:51.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:51.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.693 issued rwts: total=3072,3134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.693 job2: (groupid=0, jobs=1): err= 0: pid=1082105: Mon Jul 15 07:46:42 2024 00:19:51.693 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:19:51.693 slat (usec): min=2, max=20373, avg=156.10, stdev=1230.82 00:19:51.693 clat (usec): min=2933, max=43076, avg=21100.47, stdev=6127.68 00:19:51.693 lat (usec): min=2953, max=43106, avg=21256.57, stdev=6215.38 00:19:51.693 clat percentiles (usec): 00:19:51.693 | 1.00th=[ 8848], 5.00th=[12125], 10.00th=[15139], 20.00th=[16319], 00:19:51.693 | 30.00th=[17433], 40.00th=[17957], 50.00th=[20579], 60.00th=[22414], 00:19:51.693 | 70.00th=[23725], 80.00th=[25822], 90.00th=[28705], 95.00th=[33817], 00:19:51.693 | 99.00th=[36439], 99.50th=[38536], 99.90th=[41157], 99.95th=[42730], 00:19:51.693 | 99.99th=[43254] 00:19:51.693 write: IOPS=3143, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1015msec); 0 zone resets 00:19:51.693 slat (usec): min=3, max=17831, avg=151.47, stdev=1142.99 00:19:51.693 clat (usec): min=539, max=78286, avg=19908.64, stdev=11741.45 00:19:51.693 lat (usec): min=1458, max=78291, avg=20060.11, stdev=11812.27 00:19:51.693 clat percentiles (usec): 00:19:51.693 | 1.00th=[ 2507], 5.00th=[ 7373], 10.00th=[ 9765], 20.00th=[13435], 00:19:51.693 | 30.00th=[15139], 40.00th=[15795], 50.00th=[17171], 60.00th=[18220], 00:19:51.693 | 70.00th=[20055], 80.00th=[23462], 90.00th=[31065], 95.00th=[44303], 00:19:51.693 | 99.00th=[69731], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:19:51.693 | 99.99th=[78119] 00:19:51.693 bw ( KiB/s): min=12240, max=12336, per=24.25%, avg=12288.00, stdev=67.88, samples=2 00:19:51.693 iops : min= 3060, max= 3084, avg=3072.00, stdev=16.97, samples=2 00:19:51.693 lat (usec) : 750=0.02% 00:19:51.693 lat (msec) : 2=0.32%, 4=0.49%, 10=5.09%, 20=53.52%, 50=38.99% 00:19:51.693 lat (msec) : 100=1.56% 00:19:51.693 cpu : usr=1.18%, sys=4.73%, ctx=211, majf=0, minf=15 00:19:51.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:51.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.693 issued rwts: total=3072,3191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.693 job3: (groupid=0, jobs=1): err= 0: pid=1082106: Mon Jul 15 07:46:42 2024 00:19:51.693 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:19:51.693 slat (usec): min=3, max=17846, avg=152.40, stdev=1146.75 00:19:51.693 clat 
(usec): min=5153, max=66675, avg=17863.99, stdev=6244.73 00:19:51.693 lat (usec): min=5161, max=66693, avg=18016.39, stdev=6358.11 00:19:51.693 clat percentiles (usec): 00:19:51.693 | 1.00th=[ 8848], 5.00th=[11994], 10.00th=[13566], 20.00th=[14091], 00:19:51.693 | 30.00th=[14484], 40.00th=[14877], 50.00th=[16188], 60.00th=[17433], 00:19:51.693 | 70.00th=[18744], 80.00th=[19792], 90.00th=[26346], 95.00th=[29492], 00:19:51.693 | 99.00th=[46400], 99.50th=[47449], 99.90th=[66847], 99.95th=[66847], 00:19:51.693 | 99.99th=[66847] 00:19:51.693 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1015msec); 0 zone resets 00:19:51.693 slat (usec): min=4, max=18090, avg=105.60, stdev=766.56 00:19:51.693 clat (usec): min=1592, max=66679, avg=16267.25, stdev=7336.50 00:19:51.693 lat (usec): min=1624, max=66699, avg=16372.85, stdev=7370.07 00:19:51.693 clat percentiles (usec): 00:19:51.693 | 1.00th=[ 5604], 5.00th=[ 7832], 10.00th=[ 9765], 20.00th=[11600], 00:19:51.693 | 30.00th=[13566], 40.00th=[15008], 50.00th=[15664], 60.00th=[16188], 00:19:51.693 | 70.00th=[16909], 80.00th=[17957], 90.00th=[21890], 95.00th=[28967], 00:19:51.693 | 99.00th=[51119], 99.50th=[53216], 99.90th=[53216], 99.95th=[66847], 00:19:51.693 | 99.99th=[66847] 00:19:51.693 bw ( KiB/s): min=14648, max=16120, per=30.36%, avg=15384.00, stdev=1040.86, samples=2 00:19:51.693 iops : min= 3662, max= 4030, avg=3846.00, stdev=260.22, samples=2 00:19:51.693 lat (msec) : 2=0.01%, 4=0.19%, 10=6.55%, 20=78.23%, 50=13.93% 00:19:51.693 lat (msec) : 100=1.08% 00:19:51.693 cpu : usr=3.75%, sys=7.40%, ctx=339, majf=0, minf=17 00:19:51.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:51.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.693 issued rwts: total=3584,3974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.693 00:19:51.693 Run status group 0 (all jobs): 00:19:51.693 READ: bw=45.9MiB/s (48.2MB/s), 8775KiB/s-13.8MiB/s (8985kB/s-14.5MB/s), io=46.6MiB (48.9MB), run=1003-1015msec 00:19:51.693 WRITE: bw=49.5MiB/s (51.9MB/s), 9.93MiB/s-15.3MiB/s (10.4MB/s-16.0MB/s), io=50.2MiB (52.7MB), run=1003-1015msec 00:19:51.693 00:19:51.693 Disk stats (read/write): 00:19:51.693 nvme0n1: ios=2101/2560, merge=0/0, ticks=19463/22053, in_queue=41516, util=98.20% 00:19:51.693 nvme0n2: ios=2600/2679, merge=0/0, ticks=25705/23318, in_queue=49023, util=93.70% 00:19:51.693 nvme0n3: ios=2489/2560, merge=0/0, ticks=38019/29595, in_queue=67614, util=94.26% 00:19:51.693 nvme0n4: ios=3129/3143, merge=0/0, ticks=55039/49672, in_queue=104711, util=98.42% 00:19:51.693 07:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:51.693 07:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1082248 00:19:51.693 07:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:51.693 07:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:51.693 [global] 00:19:51.693 thread=1 00:19:51.693 invalidate=1 00:19:51.693 rw=read 00:19:51.693 time_based=1 00:19:51.693 runtime=10 00:19:51.693 ioengine=libaio 00:19:51.693 direct=1 00:19:51.693 bs=4096 00:19:51.693 iodepth=1 00:19:51.693 norandommap=1 00:19:51.693 numjobs=1 00:19:51.693 00:19:51.693 [job0] 00:19:51.693 filename=/dev/nvme0n1 00:19:51.693 [job1] 00:19:51.693 
filename=/dev/nvme0n2 00:19:51.693 [job2] 00:19:51.693 filename=/dev/nvme0n3 00:19:51.693 [job3] 00:19:51.693 filename=/dev/nvme0n4 00:19:51.693 Could not set queue depth (nvme0n1) 00:19:51.693 Could not set queue depth (nvme0n2) 00:19:51.693 Could not set queue depth (nvme0n3) 00:19:51.693 Could not set queue depth (nvme0n4) 00:19:51.693 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.693 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.693 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.693 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.693 fio-3.35 00:19:51.693 Starting 4 threads 00:19:54.972 07:46:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:54.972 07:46:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:54.972 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21925888, buflen=4096 00:19:54.972 fio: pid=1082339, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.972 07:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.972 07:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:54.972 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9330688, buflen=4096 00:19:54.972 fio: pid=1082338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:55.230 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=29052928, buflen=4096 00:19:55.230 fio: pid=1082336, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:55.488 07:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.488 07:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:55.746 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=24887296, buflen=4096 00:19:55.746 fio: pid=1082337, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:55.746 00:19:55.746 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1082336: Mon Jul 15 07:46:46 2024 00:19:55.746 read: IOPS=2053, BW=8214KiB/s (8411kB/s)(27.7MiB/3454msec) 00:19:55.746 slat (usec): min=5, max=15471, avg=18.75, stdev=276.40 00:19:55.746 clat (usec): min=282, max=41976, avg=461.80, stdev=1679.36 00:19:55.746 lat (usec): min=290, max=41992, avg=480.55, stdev=1702.32 00:19:55.746 clat percentiles (usec): 00:19:55.746 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 334], 00:19:55.746 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 396], 00:19:55.746 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 469], 95.00th=[ 545], 00:19:55.746 | 99.00th=[ 701], 99.50th=[ 775], 99.90th=[41157], 99.95th=[41157], 00:19:55.746 | 99.99th=[42206] 00:19:55.746 bw ( KiB/s): min= 4328, max=10640, per=37.25%, avg=8182.67, stdev=2802.39, samples=6 00:19:55.746 iops : min= 1082, max= 2660, 
avg=2045.67, stdev=700.60, samples=6 00:19:55.746 lat (usec) : 500=92.59%, 750=6.77%, 1000=0.42% 00:19:55.746 lat (msec) : 2=0.03%, 10=0.01%, 50=0.17% 00:19:55.746 cpu : usr=1.71%, sys=3.88%, ctx=7100, majf=0, minf=1 00:19:55.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.746 issued rwts: total=7094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.746 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1082337: Mon Jul 15 07:46:46 2024 00:19:55.746 read: IOPS=1604, BW=6416KiB/s (6570kB/s)(23.7MiB/3788msec) 00:19:55.746 slat (usec): min=5, max=29544, avg=29.45, stdev=558.19 00:19:55.746 clat (usec): min=286, max=42008, avg=586.78, stdev=2711.66 00:19:55.746 lat (usec): min=292, max=42023, avg=616.24, stdev=2768.32 00:19:55.746 clat percentiles (usec): 00:19:55.746 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 347], 00:19:55.746 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 408], 00:19:55.746 | 70.00th=[ 420], 80.00th=[ 441], 90.00th=[ 482], 95.00th=[ 529], 00:19:55.746 | 99.00th=[ 791], 99.50th=[ 1565], 99.90th=[41157], 99.95th=[41157], 00:19:55.746 | 99.99th=[42206] 00:19:55.746 bw ( KiB/s): min= 1568, max= 9568, per=29.22%, avg=6418.29, stdev=3199.49, samples=7 00:19:55.746 iops : min= 392, max= 2392, avg=1604.57, stdev=799.87, samples=7 00:19:55.746 lat (usec) : 500=92.60%, 750=6.24%, 1000=0.53% 00:19:55.746 lat (msec) : 2=0.13%, 4=0.02%, 10=0.02%, 50=0.46% 00:19:55.746 cpu : usr=1.40%, sys=3.09%, ctx=6085, majf=0, minf=1 00:19:55.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.746 issued rwts: total=6077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.746 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1082338: Mon Jul 15 07:46:46 2024 00:19:55.746 read: IOPS=713, BW=2851KiB/s (2919kB/s)(9112KiB/3196msec) 00:19:55.746 slat (nsec): min=5771, max=70079, avg=12431.68, stdev=6976.34 00:19:55.747 clat (usec): min=298, max=41458, avg=1377.32, stdev=6078.43 00:19:55.747 lat (usec): min=304, max=41492, avg=1389.75, stdev=6079.87 00:19:55.747 clat percentiles (usec): 00:19:55.747 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:19:55.747 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 383], 00:19:55.747 | 70.00th=[ 445], 80.00th=[ 519], 90.00th=[ 848], 95.00th=[ 922], 00:19:55.747 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:19:55.747 | 99.99th=[41681] 00:19:55.747 bw ( KiB/s): min= 152, max= 5528, per=11.10%, avg=2438.67, stdev=2351.28, samples=6 00:19:55.747 iops : min= 38, max= 1382, avg=609.67, stdev=587.82, samples=6 00:19:55.747 lat (usec) : 500=77.05%, 750=11.06%, 1000=9.21% 00:19:55.747 lat (msec) : 2=0.26%, 10=0.04%, 20=0.04%, 50=2.28% 00:19:55.747 cpu : usr=0.44%, sys=1.44%, ctx=2280, majf=0, minf=1 00:19:55.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:55.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.747 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.747 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1082339: Mon Jul 15 07:46:46 2024 00:19:55.747 read: IOPS=1836, BW=7345KiB/s (7522kB/s)(20.9MiB/2915msec) 00:19:55.747 slat (nsec): min=5161, max=69262, avg=16620.19, stdev=10063.93 00:19:55.747 clat (usec): min=290, max=41954, avg=519.67, stdev=2078.41 00:19:55.747 lat (usec): min=298, max=41971, avg=536.30, stdev=2078.40 00:19:55.747 clat percentiles (usec): 00:19:55.747 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 359], 00:19:55.747 | 30.00th=[ 375], 40.00th=[ 392], 50.00th=[ 412], 60.00th=[ 420], 00:19:55.747 | 70.00th=[ 433], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 529], 00:19:55.747 | 99.00th=[ 619], 99.50th=[ 676], 99.90th=[41157], 99.95th=[41157], 00:19:55.747 | 99.99th=[42206] 00:19:55.747 bw ( KiB/s): min= 4288, max= 9672, per=35.81%, avg=7865.60, stdev=2273.49, samples=5 00:19:55.747 iops : min= 1072, max= 2418, avg=1966.40, stdev=568.37, samples=5 00:19:55.747 lat (usec) : 500=90.75%, 750=8.83%, 1000=0.07% 00:19:55.747 lat (msec) : 2=0.04%, 10=0.02%, 50=0.26% 00:19:55.747 cpu : usr=1.54%, sys=3.81%, ctx=5355, majf=0, minf=1 00:19:55.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.747 issued rwts: total=5354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.747 00:19:55.747 Run status group 0 (all jobs): 00:19:55.747 READ: bw=21.4MiB/s (22.5MB/s), 2851KiB/s-8214KiB/s (2919kB/s-8411kB/s), io=81.2MiB (85.2MB), run=2915-3788msec 00:19:55.747 00:19:55.747 Disk stats (read/write): 00:19:55.747 nvme0n1: ios=6944/0, merge=0/0, ticks=3617/0, in_queue=3617, util=98.51% 00:19:55.747 nvme0n2: ios=5824/0, merge=0/0, ticks=3327/0, in_queue=3327, util=94.64% 00:19:55.747 nvme0n3: ios=2148/0, merge=0/0, ticks=3670/0, in_queue=3670, util=99.66% 00:19:55.747 nvme0n4: ios=5319/0, merge=0/0, ticks=3765/0, in_queue=3765, util=99.63% 00:19:55.747 07:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.747 07:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:56.005 07:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.005 07:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:56.264 07:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.264 07:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:56.521 07:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.521 07:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:57.090 07:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.090 07:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:57.350 07:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:57.350 07:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1082248 00:19:57.350 07:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:57.350 07:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:57.982 nvmf hotplug test: fio failed as expected 00:19:57.982 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.239 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.239 rmmod nvme_tcp 00:19:58.498 rmmod nvme_fabrics 00:19:58.498 rmmod nvme_keyring 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1080094 ']' 00:19:58.498 
07:46:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1080094 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1080094 ']' 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1080094 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1080094 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1080094' 00:19:58.498 killing process with pid 1080094 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1080094 00:19:58.498 07:46:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1080094 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.874 07:46:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.781 07:46:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.781 00:20:01.781 real 0m26.563s 00:20:01.781 user 1m31.782s 00:20:01.781 sys 0m7.182s 00:20:01.781 07:46:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.781 07:46:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.781 ************************************ 00:20:01.781 END TEST nvmf_fio_target 00:20:01.781 ************************************ 00:20:01.781 07:46:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:01.781 07:46:52 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:01.781 07:46:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:01.781 07:46:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.781 07:46:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.781 ************************************ 00:20:01.781 START TEST nvmf_bdevio 00:20:01.781 ************************************ 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:01.781 * Looking for test storage... 
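For reference, the hotplug check that just completed in nvmf_fio_target follows a simple pattern: run fio against the exported NVMe-oF namespaces, delete the backing bdevs over RPC while I/O is in flight, and treat fio's Remote I/O errors (err=121) as the pass condition. A minimal sketch of that sequence, using only commands visible in this log — the fio-wrapper flags, the sleep, and the bdev list are taken from the run; the surrounding control flow is illustrative, not the actual fio.sh source:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1. Start fio in the background: 10s of 4 KiB reads at queue depth 1.
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                                   # let fio ramp up before pulling devices

# 2. Delete the backing bdevs out from under the running workload.
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$m"
done

# 3. fio must now fail; a zero exit status would mean the hotplug path is broken.
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal" >&2
    exit 1
fi
echo 'nvmf hotplug test: fio failed as expected'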
00:20:01.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.781 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.782 07:46:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:04.331 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:04.331 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:04.331 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:04.331 
Found net devices under 0000:0a:00.1: cvl_0_1 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.331 07:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.331 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:20:04.331 00:20:04.331 --- 10.0.0.2 ping statistics --- 00:20:04.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.332 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:20:04.332 00:20:04.332 --- 10.0.0.1 ping statistics --- 00:20:04.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.332 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1085218 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1085218 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1085218 ']' 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.332 07:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.332 [2024-07-15 07:46:55.246198] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:04.332 [2024-07-15 07:46:55.246359] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.332 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.332 [2024-07-15 07:46:55.401203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.590 [2024-07-15 07:46:55.672303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.590 [2024-07-15 07:46:55.672384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:04.590 [2024-07-15 07:46:55.672414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.590 [2024-07-15 07:46:55.672438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.590 [2024-07-15 07:46:55.672462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.590 [2024-07-15 07:46:55.675925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:04.590 [2024-07-15 07:46:55.676008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:04.590 [2024-07-15 07:46:55.676055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.590 [2024-07-15 07:46:55.676065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:05.157 [2024-07-15 07:46:56.192659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:05.157 Malloc0 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
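The five RPCs traced above stand up the whole bdevio target: a TCP transport, a RAM-backed bdev, a subsystem, its namespace, and a listener; the nvmf_tcp_listen notice on the next line confirms the listener took effect. A minimal standalone sketch of the same sequence, assuming an nvmf_tgt is already running and scripts/rpc.py is reachable as rpc.py (all flags are copied verbatim from the trace):

    # TCP transport; -o and -u 8192 come straight from NVMF_TRANSPORT_OPTS in this run
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks (the "131072 blocks of 512 bytes" reported later)
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Subsystem allowing any host (-a), plus its namespace and listener
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket lives in the filesystem rather than the network namespace, so rpc.py needs no ip netns exec wrapper even though the target itself runs inside cvl_0_0_ns_spdk.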
00:20:05.157 [2024-07-15 07:46:56.301026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.157 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.157 { 00:20:05.157 "params": { 00:20:05.157 "name": "Nvme$subsystem", 00:20:05.157 "trtype": "$TEST_TRANSPORT", 00:20:05.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.157 "adrfam": "ipv4", 00:20:05.157 "trsvcid": "$NVMF_PORT", 00:20:05.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.158 "hdgst": ${hdgst:-false}, 00:20:05.158 "ddgst": ${ddgst:-false} 00:20:05.158 }, 00:20:05.158 "method": "bdev_nvme_attach_controller" 00:20:05.158 } 00:20:05.158 EOF 00:20:05.158 )") 00:20:05.158 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:05.158 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:05.158 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:05.158 07:46:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:05.158 "params": { 00:20:05.158 "name": "Nvme1", 00:20:05.158 "trtype": "tcp", 00:20:05.158 "traddr": "10.0.0.2", 00:20:05.158 "adrfam": "ipv4", 00:20:05.158 "trsvcid": "4420", 00:20:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.158 "hdgst": false, 00:20:05.158 "ddgst": false 00:20:05.158 }, 00:20:05.158 "method": "bdev_nvme_attach_controller" 00:20:05.158 }' 00:20:05.158 [2024-07-15 07:46:56.383360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:05.158 [2024-07-15 07:46:56.383501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085375 ] 00:20:05.417 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.418 [2024-07-15 07:46:56.509611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:05.678 [2024-07-15 07:46:56.755103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.678 [2024-07-15 07:46:56.755154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.678 [2024-07-15 07:46:56.755145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.244 I/O targets: 00:20:06.244 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:06.244 00:20:06.244 00:20:06.244 CUnit - A unit testing framework for C - Version 2.1-3 00:20:06.244 http://cunit.sourceforge.net/ 00:20:06.244 00:20:06.244 00:20:06.244 Suite: bdevio tests on: Nvme1n1 00:20:06.244 Test: blockdev write read block ...passed 00:20:06.244 Test: blockdev write zeroes read block ...passed 00:20:06.244 Test: blockdev write zeroes read no split ...passed 00:20:06.244 Test: blockdev write zeroes read split ...passed 00:20:06.244 Test: blockdev write zeroes read split partial ...passed 00:20:06.244 Test: blockdev reset ...[2024-07-15 07:46:57.350961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:06.244 [2024-07-15 07:46:57.351139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:20:06.245 [2024-07-15 07:46:57.364831] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:06.245 passed 00:20:06.245 Test: blockdev write read 8 blocks ...passed 00:20:06.245 Test: blockdev write read size > 128k ...passed 00:20:06.245 Test: blockdev write read invalid size ...passed 00:20:06.245 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:06.245 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:06.245 Test: blockdev write read max offset ...passed 00:20:06.503 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:06.503 Test: blockdev writev readv 8 blocks ...passed 00:20:06.503 Test: blockdev writev readv 30 x 1block ...passed 00:20:06.503 Test: blockdev writev readv block ...passed 00:20:06.503 Test: blockdev writev readv size > 128k ...passed 00:20:06.503 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:06.503 Test: blockdev comparev and writev ...[2024-07-15 07:46:57.585583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.585661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.585724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.585763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.586313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.586360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.586414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.586460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.586996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.587033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.587089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.587129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.587648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.587684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.587751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.503 [2024-07-15 07:46:57.587792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:06.503 passed 00:20:06.503 Test: blockdev nvme passthru rw ...passed 00:20:06.503 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:46:57.672391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.503 [2024-07-15 07:46:57.672451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.672757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.503 [2024-07-15 07:46:57.672795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.673095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.503 [2024-07-15 07:46:57.673133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:06.503 [2024-07-15 07:46:57.673438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.503 [2024-07-15 07:46:57.673474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:06.503 passed 00:20:06.503 Test: blockdev nvme admin passthru ...passed 00:20:06.761 Test: blockdev copy ...passed 00:20:06.761 00:20:06.761 Run Summary: Type Total Ran Passed Failed Inactive 00:20:06.761 suites 1 1 n/a 0 0 00:20:06.761 tests 23 23 23 0 0 00:20:06.761 asserts 152 152 152 0 n/a 00:20:06.761 00:20:06.761 Elapsed time = 1.155 seconds 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.699 rmmod nvme_tcp 00:20:07.699 rmmod nvme_fabrics 00:20:07.699 rmmod nvme_keyring 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.699 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1085218 ']' 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1085218 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1085218 ']' 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1085218 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085218 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085218' 00:20:07.700 killing process with pid 1085218 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1085218 00:20:07.700 07:46:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1085218 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.081 07:47:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.623 07:47:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:11.623 00:20:11.623 real 0m9.367s 00:20:11.623 user 0m21.756s 00:20:11.623 sys 0m2.455s 00:20:11.623 07:47:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.623 07:47:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.623 ************************************ 00:20:11.623 END TEST nvmf_bdevio 00:20:11.623 ************************************ 00:20:11.623 07:47:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:11.623 07:47:02 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:11.623 07:47:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:11.623 07:47:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.623 07:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.623 ************************************ 00:20:11.623 START TEST nvmf_auth_target 00:20:11.623 ************************************ 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:11.623 * Looking for test storage... 
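Two notes on the bdevio run that just ended. First, the COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED completions printed during the comparev-and-writev test are logged as notices, not errors: the test drives the miscompare path of fused COMPARE+WRITE deliberately, and the suite still finishes 23/23 passed in 1.155 seconds. Second, the teardown traced above follows a fixed pattern; a condensed sketch, where the ip netns delete line is an assumption about what the suppressed _remove_spdk_ns call amounts to:

    # Unload the kernel initiator stack (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the target (nvmfpid=1085218 in this run), then undo the network setup
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1

The nvmf_auth_target test starting next repeats the same device discovery and namespace setup before exercising DH-HMAC-CHAP; its test-storage probe resolves on the following line.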
00:20:11.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:11.623 07:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.540 07:47:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:13.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:13.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:20:13.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.540 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:13.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:20:13.541 00:20:13.541 --- 10.0.0.2 ping statistics --- 00:20:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.541 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:20:13.541 00:20:13.541 --- 10.0.0.1 ping statistics --- 00:20:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.541 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1087705 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1087705 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1087705 ']' 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
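From this point the auth test runs two SPDK applications: the nvmf_tgt just launched inside the namespace (nvmfpid=1087705, RPC at /var/tmp/spdk.sock, auth tracing via -L nvmf_auth) and, just below, a host-side spdk_tgt (hostpid=1087855, RPC at /var/tmp/host.sock, -L nvme_auth). The gen_dhchap_key calls that follow mint one secret per digest. A sketch of gen_dhchap_key null 48 under stated assumptions: the python one-liner paraphrases format_dhchap_key, a DHHC-1 secret being the base64 of the secret text with an appended CRC-32 (the little-endian CRC placement and python3 invocation are assumptions; digest id 0 means null):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
    file=$(mktemp -t spdk.key-null.XXX)    # e.g. /tmp/spdk.key-null.T0E in this run
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (0, base64.b64encode(k+c).decode()))' "$key" > "$file"
    chmod 0600 "$file"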
00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.541 07:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1087855 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=178d973373ee65b5cef3380c722c87ef0f5b15784e34eff0 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.T0E 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 178d973373ee65b5cef3380c722c87ef0f5b15784e34eff0 0 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 178d973373ee65b5cef3380c722c87ef0f5b15784e34eff0 0 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=178d973373ee65b5cef3380c722c87ef0f5b15784e34eff0 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.T0E 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.T0E 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.T0E 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=63e887c7c34f0ce1c701793feed963f6e54926020ed23b33fa6a08ea08abc888 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aC6 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 63e887c7c34f0ce1c701793feed963f6e54926020ed23b33fa6a08ea08abc888 3 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 63e887c7c34f0ce1c701793feed963f6e54926020ed23b33fa6a08ea08abc888 3 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=63e887c7c34f0ce1c701793feed963f6e54926020ed23b33fa6a08ea08abc888 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aC6 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aC6 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.aC6 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=58274bbf8cd0498c6d7108f13d4f754f 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.e7H 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 58274bbf8cd0498c6d7108f13d4f754f 1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 58274bbf8cd0498c6d7108f13d4f754f 1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=58274bbf8cd0498c6d7108f13d4f754f 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.e7H 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.e7H 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.e7H 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=420eded23a8d49c57e083e0ab493fddc8049322c7c34bd9e 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5lG 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 420eded23a8d49c57e083e0ab493fddc8049322c7c34bd9e 2 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 420eded23a8d49c57e083e0ab493fddc8049322c7c34bd9e 2 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=420eded23a8d49c57e083e0ab493fddc8049322c7c34bd9e 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5lG 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5lG 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.5lG 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:14.477 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=83e71a4d6215a746abccf4b1a17bbc78517124c3bea3d5b6 00:20:14.477 
07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iT1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 83e71a4d6215a746abccf4b1a17bbc78517124c3bea3d5b6 2 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 83e71a4d6215a746abccf4b1a17bbc78517124c3bea3d5b6 2 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=83e71a4d6215a746abccf4b1a17bbc78517124c3bea3d5b6 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iT1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iT1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.iT1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6e05ced85d0f2d809ed9c2e0cd05a90a 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HQW 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6e05ced85d0f2d809ed9c2e0cd05a90a 1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6e05ced85d0f2d809ed9c2e0cd05a90a 1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6e05ced85d0f2d809ed9c2e0cd05a90a 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HQW 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HQW 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.HQW 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=244b612d9a52a0894069a55367e84ba37c46d2aa5f0d25b3d4c31bda3b064f11 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ITw 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 244b612d9a52a0894069a55367e84ba37c46d2aa5f0d25b3d4c31bda3b064f11 3 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 244b612d9a52a0894069a55367e84ba37c46d2aa5f0d25b3d4c31bda3b064f11 3 00:20:14.737 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=244b612d9a52a0894069a55367e84ba37c46d2aa5f0d25b3d4c31bda3b064f11 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ITw 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ITw 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ITw 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1087705 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1087705 ']' 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
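Every key minted above gets registered twice in what follows, once on the target's RPC socket and once on the host's /var/tmp/host.sock, after which connect_authenticate walks the digest and dhgroup combinations. Condensed to a single authenticated attach using the exact names and flags from this run (rpc.py stands in for the full scripts/rpc.py path, and the nqn.2024-03.io.spdk:cnode0 subsystem is assumed to exist already):

    # Register the transfer key and controller key on both sides
    rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.T0E
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aC6
    rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.T0E
    rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aC6
    # Pin the host to one digest/dhgroup pair (sha256 + null in the first iteration)
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # Allow the host on the subsystem with bidirectional DH-HMAC-CHAP keys
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authenticated connect from the host-side bdev_nvme driver
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The --dhchap-key names refer to keyring entries, not files, which is why the keyring_file_add_key registrations must precede both the add_host and the attach.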
00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.738 07:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1087855 /var/tmp/host.sock 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1087855 ']' 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:15.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.029 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.T0E 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.598 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.855 07:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.855 07:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.T0E 00:20:15.855 07:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.T0E 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.aC6 ]] 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aC6 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aC6 00:20:16.113 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aC6 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.e7H 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.e7H 00:20:16.371 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.e7H 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.5lG ]] 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5lG 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5lG 00:20:16.629 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5lG 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iT1 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iT1 00:20:16.893 07:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iT1 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.HQW ]] 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HQW 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HQW 00:20:17.149 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.HQW 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ITw 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ITw 00:20:17.406 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ITw 00:20:17.663 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:17.663 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:17.663 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.663 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.663 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.663 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.920 07:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.179 00:20:18.179 07:47:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.179 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.179 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.437 { 00:20:18.437 "cntlid": 1, 00:20:18.437 "qid": 0, 00:20:18.437 "state": "enabled", 00:20:18.437 "thread": "nvmf_tgt_poll_group_000", 00:20:18.437 "listen_address": { 00:20:18.437 "trtype": "TCP", 00:20:18.437 "adrfam": "IPv4", 00:20:18.437 "traddr": "10.0.0.2", 00:20:18.437 "trsvcid": "4420" 00:20:18.437 }, 00:20:18.437 "peer_address": { 00:20:18.437 "trtype": "TCP", 00:20:18.437 "adrfam": "IPv4", 00:20:18.437 "traddr": "10.0.0.1", 00:20:18.437 "trsvcid": "44910" 00:20:18.437 }, 00:20:18.437 "auth": { 00:20:18.437 "state": "completed", 00:20:18.437 "digest": "sha256", 00:20:18.437 "dhgroup": "null" 00:20:18.437 } 00:20:18.437 } 00:20:18.437 ]' 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.437 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.697 07:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:20:19.634 07:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.891 07:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.891 07:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.891 07:47:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 07:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.891 07:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.891 07:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.891 07:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.149 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.406 00:20:20.406 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.406 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.406 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.663 { 00:20:20.663 "cntlid": 3, 00:20:20.663 "qid": 0, 00:20:20.663 
"state": "enabled", 00:20:20.663 "thread": "nvmf_tgt_poll_group_000", 00:20:20.663 "listen_address": { 00:20:20.663 "trtype": "TCP", 00:20:20.663 "adrfam": "IPv4", 00:20:20.663 "traddr": "10.0.0.2", 00:20:20.663 "trsvcid": "4420" 00:20:20.663 }, 00:20:20.663 "peer_address": { 00:20:20.663 "trtype": "TCP", 00:20:20.663 "adrfam": "IPv4", 00:20:20.663 "traddr": "10.0.0.1", 00:20:20.663 "trsvcid": "39648" 00:20:20.663 }, 00:20:20.663 "auth": { 00:20:20.663 "state": "completed", 00:20:20.663 "digest": "sha256", 00:20:20.663 "dhgroup": "null" 00:20:20.663 } 00:20:20.663 } 00:20:20.663 ]' 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.663 07:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.922 07:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:20:21.855 07:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.855 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:22.112 07:47:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.112 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.371 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.630 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.630 { 00:20:22.630 "cntlid": 5, 00:20:22.630 "qid": 0, 00:20:22.630 "state": "enabled", 00:20:22.630 "thread": "nvmf_tgt_poll_group_000", 00:20:22.630 "listen_address": { 00:20:22.630 "trtype": "TCP", 00:20:22.630 "adrfam": "IPv4", 00:20:22.630 "traddr": "10.0.0.2", 00:20:22.630 "trsvcid": "4420" 00:20:22.630 }, 00:20:22.630 "peer_address": { 00:20:22.630 "trtype": "TCP", 00:20:22.630 "adrfam": "IPv4", 00:20:22.630 "traddr": "10.0.0.1", 00:20:22.630 "trsvcid": "39676" 00:20:22.630 }, 00:20:22.630 "auth": { 00:20:22.630 "state": "completed", 00:20:22.630 "digest": "sha256", 00:20:22.630 "dhgroup": "null" 00:20:22.630 } 00:20:22.630 } 00:20:22.630 ]' 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.889 07:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.147 07:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.084 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.342 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.910 00:20:24.910 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.910 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.910 07:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.910 { 00:20:24.910 "cntlid": 7, 00:20:24.910 "qid": 0, 00:20:24.910 "state": "enabled", 00:20:24.910 "thread": "nvmf_tgt_poll_group_000", 00:20:24.910 "listen_address": { 00:20:24.910 "trtype": "TCP", 00:20:24.910 "adrfam": "IPv4", 00:20:24.910 "traddr": "10.0.0.2", 00:20:24.910 "trsvcid": "4420" 00:20:24.910 }, 00:20:24.910 "peer_address": { 00:20:24.910 "trtype": "TCP", 00:20:24.910 "adrfam": "IPv4", 00:20:24.910 "traddr": "10.0.0.1", 00:20:24.910 "trsvcid": "39698" 00:20:24.910 }, 00:20:24.910 "auth": { 00:20:24.910 "state": "completed", 00:20:24.910 "digest": "sha256", 00:20:24.910 "dhgroup": "null" 00:20:24.910 } 00:20:24.910 } 00:20:24.910 ]' 00:20:24.910 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.168 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.427 07:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.365 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.623 07:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.881 00:20:26.881 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.881 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.881 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.139 { 00:20:27.139 "cntlid": 9, 00:20:27.139 "qid": 0, 00:20:27.139 "state": "enabled", 00:20:27.139 "thread": "nvmf_tgt_poll_group_000", 00:20:27.139 "listen_address": { 00:20:27.139 "trtype": "TCP", 00:20:27.139 "adrfam": "IPv4", 00:20:27.139 "traddr": "10.0.0.2", 00:20:27.139 "trsvcid": "4420" 00:20:27.139 }, 00:20:27.139 "peer_address": { 00:20:27.139 "trtype": "TCP", 00:20:27.139 "adrfam": "IPv4", 00:20:27.139 "traddr": "10.0.0.1", 00:20:27.139 "trsvcid": "39724" 00:20:27.139 }, 00:20:27.139 "auth": { 00:20:27.139 "state": "completed", 00:20:27.139 "digest": "sha256", 00:20:27.139 "dhgroup": "ffdhe2048" 00:20:27.139 } 00:20:27.139 } 00:20:27.139 ]' 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.139 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.401 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.401 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.401 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.694 07:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:28.628 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.887 07:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.145 00:20:29.146 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.146 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.146 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.404 { 00:20:29.404 "cntlid": 11, 00:20:29.404 "qid": 0, 00:20:29.404 "state": "enabled", 00:20:29.404 "thread": "nvmf_tgt_poll_group_000", 00:20:29.404 "listen_address": { 00:20:29.404 "trtype": "TCP", 00:20:29.404 "adrfam": "IPv4", 00:20:29.404 "traddr": "10.0.0.2", 00:20:29.404 "trsvcid": "4420" 00:20:29.404 }, 00:20:29.404 "peer_address": { 00:20:29.404 "trtype": "TCP", 00:20:29.404 "adrfam": "IPv4", 00:20:29.404 "traddr": "10.0.0.1", 00:20:29.404 "trsvcid": "39746" 00:20:29.404 }, 00:20:29.404 "auth": { 00:20:29.404 "state": "completed", 00:20:29.404 "digest": "sha256", 00:20:29.404 "dhgroup": "ffdhe2048" 00:20:29.404 } 00:20:29.404 } 00:20:29.404 ]' 00:20:29.404 
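Each connect_authenticate round ends with the assertion pass visible in the trace: the target's nvmf_subsystem_get_qpairs output is captured into qpairs, then jq pulls the negotiated auth fields and each is compared against what was just configured. Condensed into a single check (names mirror the trace; the script itself runs three separate [[ ]] tests):

# Sketch of the per-round verification against the captured qpairs JSON.
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")    # hash actually negotiated
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")  # DH group actually negotiated
state=$(jq -r '.[0].auth.state' <<< "$qpairs")      # "completed" means auth succeeded
[[ $digest == sha256 && $dhgroup == ffdhe2048 && $state == completed ]]

A state other than completed, or a mismatched digest or dhgroup, would fail the round before the nvme-cli connect/disconnect pass that follows.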
07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.404 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.664 07:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.602 07:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.170 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.428 00:20:31.428 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.428 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.428 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.686 { 00:20:31.686 "cntlid": 13, 00:20:31.686 "qid": 0, 00:20:31.686 "state": "enabled", 00:20:31.686 "thread": "nvmf_tgt_poll_group_000", 00:20:31.686 "listen_address": { 00:20:31.686 "trtype": "TCP", 00:20:31.686 "adrfam": "IPv4", 00:20:31.686 "traddr": "10.0.0.2", 00:20:31.686 "trsvcid": "4420" 00:20:31.686 }, 00:20:31.686 "peer_address": { 00:20:31.686 "trtype": "TCP", 00:20:31.686 "adrfam": "IPv4", 00:20:31.686 "traddr": "10.0.0.1", 00:20:31.686 "trsvcid": "43370" 00:20:31.686 }, 00:20:31.686 "auth": { 00:20:31.686 "state": "completed", 00:20:31.686 "digest": "sha256", 00:20:31.686 "dhgroup": "ffdhe2048" 00:20:31.686 } 00:20:31.686 } 00:20:31.686 ]' 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.686 07:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.946 07:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:33.330 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.331 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.589 00:20:33.589 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.589 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.589 07:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.847 { 00:20:33.847 "cntlid": 15, 00:20:33.847 "qid": 0, 00:20:33.847 "state": "enabled", 00:20:33.847 "thread": "nvmf_tgt_poll_group_000", 00:20:33.847 "listen_address": { 00:20:33.847 "trtype": "TCP", 00:20:33.847 "adrfam": "IPv4", 00:20:33.847 "traddr": "10.0.0.2", 00:20:33.847 "trsvcid": "4420" 00:20:33.847 }, 00:20:33.847 "peer_address": { 00:20:33.847 "trtype": "TCP", 00:20:33.847 "adrfam": "IPv4", 00:20:33.847 "traddr": "10.0.0.1", 00:20:33.847 "trsvcid": "43402" 00:20:33.847 }, 00:20:33.847 "auth": { 00:20:33.847 "state": "completed", 00:20:33.847 "digest": "sha256", 00:20:33.847 "dhgroup": "ffdhe2048" 00:20:33.847 } 00:20:33.847 } 00:20:33.847 ]' 00:20:33.847 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.105 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.362 07:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.298 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.557 07:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.817 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.076 07:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.334 { 00:20:36.334 "cntlid": 17, 00:20:36.334 "qid": 0, 00:20:36.334 "state": "enabled", 00:20:36.334 "thread": "nvmf_tgt_poll_group_000", 00:20:36.334 "listen_address": { 00:20:36.334 "trtype": "TCP", 00:20:36.334 "adrfam": "IPv4", 00:20:36.334 "traddr": 
"10.0.0.2", 00:20:36.334 "trsvcid": "4420" 00:20:36.334 }, 00:20:36.334 "peer_address": { 00:20:36.334 "trtype": "TCP", 00:20:36.334 "adrfam": "IPv4", 00:20:36.334 "traddr": "10.0.0.1", 00:20:36.334 "trsvcid": "43426" 00:20:36.334 }, 00:20:36.334 "auth": { 00:20:36.334 "state": "completed", 00:20:36.334 "digest": "sha256", 00:20:36.334 "dhgroup": "ffdhe3072" 00:20:36.334 } 00:20:36.334 } 00:20:36.334 ]' 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.334 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.592 07:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.527 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.785 07:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.043 00:20:38.043 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.043 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.043 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.301 { 00:20:38.301 "cntlid": 19, 00:20:38.301 "qid": 0, 00:20:38.301 "state": "enabled", 00:20:38.301 "thread": "nvmf_tgt_poll_group_000", 00:20:38.301 "listen_address": { 00:20:38.301 "trtype": "TCP", 00:20:38.301 "adrfam": "IPv4", 00:20:38.301 "traddr": "10.0.0.2", 00:20:38.301 "trsvcid": "4420" 00:20:38.301 }, 00:20:38.301 "peer_address": { 00:20:38.301 "trtype": "TCP", 00:20:38.301 "adrfam": "IPv4", 00:20:38.301 "traddr": "10.0.0.1", 00:20:38.301 "trsvcid": "43448" 00:20:38.301 }, 00:20:38.301 "auth": { 00:20:38.301 "state": "completed", 00:20:38.301 "digest": "sha256", 00:20:38.301 "dhgroup": "ffdhe3072" 00:20:38.301 } 00:20:38.301 } 00:20:38.301 ]' 00:20:38.301 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.558 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.816 07:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:39.749 07:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.316 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.574 00:20:40.574 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.574 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.574 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.832 { 00:20:40.832 "cntlid": 21, 00:20:40.832 "qid": 0, 00:20:40.832 "state": "enabled", 00:20:40.832 "thread": "nvmf_tgt_poll_group_000", 00:20:40.832 "listen_address": { 00:20:40.832 "trtype": "TCP", 00:20:40.832 "adrfam": "IPv4", 00:20:40.832 "traddr": "10.0.0.2", 00:20:40.832 "trsvcid": "4420" 00:20:40.832 }, 00:20:40.832 "peer_address": { 00:20:40.832 "trtype": "TCP", 00:20:40.832 "adrfam": "IPv4", 00:20:40.832 "traddr": "10.0.0.1", 00:20:40.832 "trsvcid": "32952" 00:20:40.832 }, 00:20:40.832 "auth": { 00:20:40.832 "state": "completed", 00:20:40.832 "digest": "sha256", 00:20:40.832 "dhgroup": "ffdhe3072" 00:20:40.832 } 00:20:40.832 } 00:20:40.832 ]' 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.832 07:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.090 07:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
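Each cycle in this stretch of the trace follows the same shape. Pieced together from the target/auth.sh@34-@49 markers above, the helper driving it looks roughly like the following. This is a reconstruction from the xtrace, not the verbatim script: $subnqn and $hostnqn stand in for the literal NQNs printed in the log, and rpc_cmd, hostrpc, and the keys/ckeys arrays are assumed from context.

connect_authenticate() {
    local digest=$1 dhgroup=$2 key=key$3 ckey qpairs
    # pass a controller key only when one was generated for this key index
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

    # register the host and its key on the target (auth.sh@39) ...
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
    # ... then authenticate from the host-side bdev application (auth.sh@40)
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" "${ckey[@]}"

    # confirm the controller came up and the qpair completed DH-HMAC-CHAP
    # with the expected digest and DH group (auth.sh@44-@48)
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
}

The qpair JSON dumps in the log are the raw output of that nvmf_subsystem_get_qpairs call; the jq checks that follow each dump are the three assertions sketched above.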
00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.050 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.307 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.873 00:20:42.873 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.873 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.873 07:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.131 { 00:20:43.131 "cntlid": 23, 00:20:43.131 "qid": 0, 00:20:43.131 "state": "enabled", 00:20:43.131 "thread": "nvmf_tgt_poll_group_000", 00:20:43.131 "listen_address": { 00:20:43.131 "trtype": "TCP", 00:20:43.131 "adrfam": "IPv4", 00:20:43.131 "traddr": "10.0.0.2", 00:20:43.131 "trsvcid": "4420" 00:20:43.131 }, 00:20:43.131 "peer_address": { 00:20:43.131 "trtype": "TCP", 00:20:43.131 "adrfam": "IPv4", 00:20:43.131 "traddr": "10.0.0.1", 00:20:43.131 "trsvcid": "32984" 00:20:43.131 }, 00:20:43.131 "auth": { 00:20:43.131 "state": "completed", 00:20:43.131 "digest": "sha256", 00:20:43.131 "dhgroup": "ffdhe3072" 00:20:43.131 } 00:20:43.131 } 00:20:43.131 ]' 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.131 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.389 07:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.325 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.583 07:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.149 00:20:45.149 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.149 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.149 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.407 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.407 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.407 07:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.407 07:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.407 07:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.407 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.407 { 00:20:45.407 "cntlid": 25, 00:20:45.407 "qid": 0, 00:20:45.407 "state": "enabled", 00:20:45.407 "thread": "nvmf_tgt_poll_group_000", 00:20:45.407 "listen_address": { 00:20:45.407 "trtype": "TCP", 00:20:45.407 "adrfam": "IPv4", 00:20:45.407 "traddr": "10.0.0.2", 00:20:45.407 "trsvcid": "4420" 00:20:45.407 }, 00:20:45.407 "peer_address": { 00:20:45.407 "trtype": "TCP", 00:20:45.407 "adrfam": "IPv4", 00:20:45.407 "traddr": "10.0.0.1", 00:20:45.407 "trsvcid": "33000" 00:20:45.407 }, 00:20:45.408 "auth": { 00:20:45.408 "state": "completed", 00:20:45.408 "digest": "sha256", 00:20:45.408 "dhgroup": "ffdhe4096" 00:20:45.408 } 00:20:45.408 } 00:20:45.408 ]' 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.408 07:47:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.408 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.665 07:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.601 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.860 07:47:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.860 07:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.427 00:20:47.427 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.427 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.427 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.684 { 00:20:47.684 "cntlid": 27, 00:20:47.684 "qid": 0, 00:20:47.684 "state": "enabled", 00:20:47.684 "thread": "nvmf_tgt_poll_group_000", 00:20:47.684 "listen_address": { 00:20:47.684 "trtype": "TCP", 00:20:47.684 "adrfam": "IPv4", 00:20:47.684 "traddr": "10.0.0.2", 00:20:47.684 "trsvcid": "4420" 00:20:47.684 }, 00:20:47.684 "peer_address": { 00:20:47.684 "trtype": "TCP", 00:20:47.684 "adrfam": "IPv4", 00:20:47.684 "traddr": "10.0.0.1", 00:20:47.684 "trsvcid": "33026" 00:20:47.684 }, 00:20:47.684 "auth": { 00:20:47.684 "state": "completed", 00:20:47.684 "digest": "sha256", 00:20:47.684 "dhgroup": "ffdhe4096" 00:20:47.684 } 00:20:47.684 } 00:20:47.684 ]' 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.684 07:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.944 07:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:20:48.880 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.145 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.145 07:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.145 07:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.145 07:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.145 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.145 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.146 07:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.403 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.403 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.660 00:20:49.660 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.660 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.660 07:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.917 { 00:20:49.917 "cntlid": 29, 00:20:49.917 "qid": 0, 00:20:49.917 "state": "enabled", 00:20:49.917 "thread": "nvmf_tgt_poll_group_000", 00:20:49.917 "listen_address": { 00:20:49.917 "trtype": "TCP", 00:20:49.917 "adrfam": "IPv4", 00:20:49.917 "traddr": "10.0.0.2", 00:20:49.917 "trsvcid": "4420" 00:20:49.917 }, 00:20:49.917 "peer_address": { 00:20:49.917 "trtype": "TCP", 00:20:49.917 "adrfam": "IPv4", 00:20:49.917 "traddr": "10.0.0.1", 00:20:49.917 "trsvcid": "33064" 00:20:49.917 }, 00:20:49.917 "auth": { 00:20:49.917 "state": "completed", 00:20:49.917 "digest": "sha256", 00:20:49.917 "dhgroup": "ffdhe4096" 00:20:49.917 } 00:20:49.917 } 00:20:49.917 ]' 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.917 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.480 07:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
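After the bdev-level check, every cycle repeats the handshake from the kernel initiator and then tears the host mapping down again, as the auth.sh@52-@56 markers just above show. Stitched together, with the generated DHHC-1 secrets elided rather than reproduced, that tail end is approximately:

# kernel-initiator leg of each cycle; the --dhchap-secret values are the
# generated keys printed verbatim in the log, elided here
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n "$subnqn"    # logs "disconnected 1 controller(s)" on success
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The hostrpc wrapper that precedes every host-side command (auth.sh@31) appears, from its expansion in the trace, to simply forward to the host application's RPC socket, along the lines of:

hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }   # $rootdir assumed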
00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.413 07:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.977 00:20:51.977 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.977 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.977 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.234 { 00:20:52.234 "cntlid": 31, 00:20:52.234 "qid": 0, 00:20:52.234 "state": "enabled", 00:20:52.234 "thread": "nvmf_tgt_poll_group_000", 00:20:52.234 "listen_address": { 00:20:52.234 "trtype": "TCP", 00:20:52.234 "adrfam": "IPv4", 00:20:52.234 "traddr": "10.0.0.2", 00:20:52.234 "trsvcid": 
"4420" 00:20:52.234 }, 00:20:52.234 "peer_address": { 00:20:52.234 "trtype": "TCP", 00:20:52.234 "adrfam": "IPv4", 00:20:52.234 "traddr": "10.0.0.1", 00:20:52.234 "trsvcid": "53570" 00:20:52.234 }, 00:20:52.234 "auth": { 00:20:52.234 "state": "completed", 00:20:52.234 "digest": "sha256", 00:20:52.234 "dhgroup": "ffdhe4096" 00:20:52.234 } 00:20:52.234 } 00:20:52.234 ]' 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.234 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.493 07:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.429 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.996 07:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.563 00:20:54.563 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.563 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.563 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.821 { 00:20:54.821 "cntlid": 33, 00:20:54.821 "qid": 0, 00:20:54.821 "state": "enabled", 00:20:54.821 "thread": "nvmf_tgt_poll_group_000", 00:20:54.821 "listen_address": { 00:20:54.821 "trtype": "TCP", 00:20:54.821 "adrfam": "IPv4", 00:20:54.821 "traddr": "10.0.0.2", 00:20:54.821 "trsvcid": "4420" 00:20:54.821 }, 00:20:54.821 "peer_address": { 00:20:54.821 "trtype": "TCP", 00:20:54.821 "adrfam": "IPv4", 00:20:54.821 "traddr": "10.0.0.1", 00:20:54.821 "trsvcid": "53606" 00:20:54.821 }, 00:20:54.821 "auth": { 00:20:54.821 "state": "completed", 00:20:54.821 "digest": "sha256", 00:20:54.821 "dhgroup": "ffdhe6144" 00:20:54.821 } 00:20:54.821 } 00:20:54.821 ]' 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.821 07:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.078 07:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.068 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.327 07:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.894 00:20:56.894 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.894 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.894 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.152 { 00:20:57.152 "cntlid": 35, 00:20:57.152 "qid": 0, 00:20:57.152 "state": "enabled", 00:20:57.152 "thread": "nvmf_tgt_poll_group_000", 00:20:57.152 "listen_address": { 00:20:57.152 "trtype": "TCP", 00:20:57.152 "adrfam": "IPv4", 00:20:57.152 "traddr": "10.0.0.2", 00:20:57.152 "trsvcid": "4420" 00:20:57.152 }, 00:20:57.152 "peer_address": { 00:20:57.152 "trtype": "TCP", 00:20:57.152 "adrfam": "IPv4", 00:20:57.152 "traddr": "10.0.0.1", 00:20:57.152 "trsvcid": "53638" 00:20:57.152 }, 00:20:57.152 "auth": { 00:20:57.152 "state": "completed", 00:20:57.152 "digest": "sha256", 00:20:57.152 "dhgroup": "ffdhe6144" 00:20:57.152 } 00:20:57.152 } 00:20:57.152 ]' 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.152 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.410 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.410 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.410 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.410 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.410 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.668 07:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
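By this point the pattern is fully mechanical: the @92-@96 markers show one pass per DH group and, within it, one pass per key index, re-arming the host-side DH-HMAC-CHAP options before each connect_authenticate call. A loop skeleton consistent with the trace follows; only sha256 is exercised in this stretch of the log, and an enclosing digest loop is likely but not visible here.

# loop structure reconstructed from the @92-@96 markers
for dhgroup in "${dhgroups[@]}"; do     # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do      # key indexes 0..3
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done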
00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.604 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.862 07:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.430 00:20:59.430 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.430 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.430 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.689 { 00:20:59.689 "cntlid": 37, 00:20:59.689 "qid": 0, 00:20:59.689 "state": "enabled", 00:20:59.689 "thread": "nvmf_tgt_poll_group_000", 00:20:59.689 "listen_address": { 00:20:59.689 "trtype": "TCP", 00:20:59.689 "adrfam": "IPv4", 00:20:59.689 "traddr": "10.0.0.2", 00:20:59.689 "trsvcid": "4420" 00:20:59.689 }, 00:20:59.689 "peer_address": { 00:20:59.689 "trtype": "TCP", 00:20:59.689 "adrfam": "IPv4", 00:20:59.689 "traddr": "10.0.0.1", 00:20:59.689 "trsvcid": "53666" 00:20:59.689 }, 00:20:59.689 "auth": { 00:20:59.689 "state": "completed", 00:20:59.689 "digest": "sha256", 00:20:59.689 "dhgroup": "ffdhe6144" 00:20:59.689 } 00:20:59.689 } 00:20:59.689 ]' 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.689 07:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.949 07:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:00.882 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.141 07:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.142 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.142 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.707 00:21:01.707 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.707 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.707 07:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.965 { 00:21:01.965 "cntlid": 39, 00:21:01.965 "qid": 0, 00:21:01.965 "state": "enabled", 00:21:01.965 "thread": "nvmf_tgt_poll_group_000", 00:21:01.965 "listen_address": { 00:21:01.965 "trtype": "TCP", 00:21:01.965 "adrfam": "IPv4", 00:21:01.965 "traddr": "10.0.0.2", 00:21:01.965 "trsvcid": "4420" 00:21:01.965 }, 00:21:01.965 "peer_address": { 00:21:01.965 "trtype": "TCP", 00:21:01.965 "adrfam": "IPv4", 00:21:01.965 "traddr": "10.0.0.1", 00:21:01.965 "trsvcid": "45880" 00:21:01.965 }, 00:21:01.965 "auth": { 00:21:01.965 "state": "completed", 00:21:01.965 "digest": "sha256", 00:21:01.965 "dhgroup": "ffdhe6144" 00:21:01.965 } 00:21:01.965 } 00:21:01.965 ]' 00:21:01.965 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.223 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.481 07:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.418 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.677 07:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.614 00:21:04.614 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.614 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.614 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.871 { 00:21:04.871 "cntlid": 41, 00:21:04.871 "qid": 0, 00:21:04.871 "state": "enabled", 00:21:04.871 "thread": "nvmf_tgt_poll_group_000", 00:21:04.871 "listen_address": { 00:21:04.871 "trtype": "TCP", 00:21:04.871 "adrfam": "IPv4", 00:21:04.871 "traddr": "10.0.0.2", 00:21:04.871 "trsvcid": "4420" 00:21:04.871 }, 00:21:04.871 "peer_address": { 00:21:04.871 "trtype": "TCP", 00:21:04.871 "adrfam": "IPv4", 00:21:04.871 "traddr": "10.0.0.1", 00:21:04.871 "trsvcid": "45904" 00:21:04.871 }, 00:21:04.871 "auth": { 00:21:04.871 "state": "completed", 00:21:04.871 "digest": "sha256", 00:21:04.871 "dhgroup": "ffdhe8192" 00:21:04.871 } 00:21:04.871 } 00:21:04.871 ]' 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.871 07:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.871 07:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.871 07:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.871 07:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.871 07:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.871 07:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.130 07:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:06.065 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.323 07:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.261 00:21:07.261 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.261 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.261 07:47:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.519 { 00:21:07.519 "cntlid": 43, 00:21:07.519 "qid": 0, 00:21:07.519 "state": "enabled", 00:21:07.519 "thread": "nvmf_tgt_poll_group_000", 00:21:07.519 "listen_address": { 00:21:07.519 "trtype": "TCP", 00:21:07.519 "adrfam": "IPv4", 00:21:07.519 "traddr": "10.0.0.2", 00:21:07.519 "trsvcid": "4420" 00:21:07.519 }, 00:21:07.519 "peer_address": { 00:21:07.519 "trtype": "TCP", 00:21:07.519 "adrfam": "IPv4", 00:21:07.519 "traddr": "10.0.0.1", 00:21:07.519 "trsvcid": "45932" 00:21:07.519 }, 00:21:07.519 "auth": { 00:21:07.519 "state": "completed", 00:21:07.519 "digest": "sha256", 00:21:07.519 "dhgroup": "ffdhe8192" 00:21:07.519 } 00:21:07.519 } 00:21:07.519 ]' 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.519 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.777 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.777 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.777 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.777 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.777 07:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.035 07:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:08.969 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.227 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.228 07:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.192 00:21:10.192 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.192 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.192 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.451 { 00:21:10.451 "cntlid": 45, 00:21:10.451 "qid": 0, 00:21:10.451 "state": "enabled", 00:21:10.451 "thread": "nvmf_tgt_poll_group_000", 00:21:10.451 "listen_address": { 00:21:10.451 "trtype": "TCP", 00:21:10.451 "adrfam": "IPv4", 00:21:10.451 "traddr": "10.0.0.2", 00:21:10.451 
"trsvcid": "4420" 00:21:10.451 }, 00:21:10.451 "peer_address": { 00:21:10.451 "trtype": "TCP", 00:21:10.451 "adrfam": "IPv4", 00:21:10.451 "traddr": "10.0.0.1", 00:21:10.451 "trsvcid": "45956" 00:21:10.451 }, 00:21:10.451 "auth": { 00:21:10.451 "state": "completed", 00:21:10.451 "digest": "sha256", 00:21:10.451 "dhgroup": "ffdhe8192" 00:21:10.451 } 00:21:10.451 } 00:21:10.451 ]' 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.451 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.709 07:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.644 07:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.902 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.852 00:21:12.852 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.852 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.852 07:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.110 { 00:21:13.110 "cntlid": 47, 00:21:13.110 "qid": 0, 00:21:13.110 "state": "enabled", 00:21:13.110 "thread": "nvmf_tgt_poll_group_000", 00:21:13.110 "listen_address": { 00:21:13.110 "trtype": "TCP", 00:21:13.110 "adrfam": "IPv4", 00:21:13.110 "traddr": "10.0.0.2", 00:21:13.110 "trsvcid": "4420" 00:21:13.110 }, 00:21:13.110 "peer_address": { 00:21:13.110 "trtype": "TCP", 00:21:13.110 "adrfam": "IPv4", 00:21:13.110 "traddr": "10.0.0.1", 00:21:13.110 "trsvcid": "40308" 00:21:13.110 }, 00:21:13.110 "auth": { 00:21:13.110 "state": "completed", 00:21:13.110 "digest": "sha256", 00:21:13.110 "dhgroup": "ffdhe8192" 00:21:13.110 } 00:21:13.110 } 00:21:13.110 ]' 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.110 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.111 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.111 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.111 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:21:13.111 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.678 07:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:14.615 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.872 07:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.130 00:21:15.130 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.130 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.130 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.387 { 00:21:15.387 "cntlid": 49, 00:21:15.387 "qid": 0, 00:21:15.387 "state": "enabled", 00:21:15.387 "thread": "nvmf_tgt_poll_group_000", 00:21:15.387 "listen_address": { 00:21:15.387 "trtype": "TCP", 00:21:15.387 "adrfam": "IPv4", 00:21:15.387 "traddr": "10.0.0.2", 00:21:15.387 "trsvcid": "4420" 00:21:15.387 }, 00:21:15.387 "peer_address": { 00:21:15.387 "trtype": "TCP", 00:21:15.387 "adrfam": "IPv4", 00:21:15.387 "traddr": "10.0.0.1", 00:21:15.387 "trsvcid": "40346" 00:21:15.387 }, 00:21:15.387 "auth": { 00:21:15.387 "state": "completed", 00:21:15.387 "digest": "sha384", 00:21:15.387 "dhgroup": "null" 00:21:15.387 } 00:21:15.387 } 00:21:15.387 ]' 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:15.387 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.644 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.644 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.644 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.902 07:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.832 07:48:07 
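The stretch from bdev_nvme_attach_controller through the nvme disconnect above is one complete connect_authenticate pass (sha384 over the null dhgroup with key slot 0 at this point in the run). Pulled out of the harness, the same pass looks like the sketch below; rpc_cmd's own socket is hidden by the trace, so routing it to the target's default RPC socket is an assumption, and key0/ckey0 are key names registered earlier in the run, outside this excerpt:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# host-side initiator: pin the digest/dhgroup pair under test
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
# target: admit the host NQN, binding it to the DH-HMAC-CHAP key pair
$rpc nvmf_subsystem_add_host "$subnqn" "$host_nqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host: attach a controller; DH-HMAC-CHAP runs during controller initialization
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$host_nqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# assert the qid-0 qpair negotiated exactly what was pinned
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .digest, .dhgroup, .state'
# tear down so the next digest/dhgroup/key combination starts clean
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$host_nqn"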
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:16.832 07:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.090 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.348 00:21:17.348 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.348 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.348 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.606 { 00:21:17.606 "cntlid": 51, 00:21:17.606 "qid": 0, 00:21:17.606 "state": "enabled", 00:21:17.606 "thread": "nvmf_tgt_poll_group_000", 00:21:17.606 "listen_address": { 00:21:17.606 "trtype": "TCP", 00:21:17.606 "adrfam": "IPv4", 00:21:17.606 "traddr": "10.0.0.2", 00:21:17.606 "trsvcid": "4420" 00:21:17.606 }, 00:21:17.606 "peer_address": { 00:21:17.606 "trtype": "TCP", 00:21:17.606 "adrfam": "IPv4", 00:21:17.606 "traddr": "10.0.0.1", 00:21:17.606 "trsvcid": "40378" 00:21:17.606 }, 00:21:17.606 "auth": { 00:21:17.606 "state": "completed", 00:21:17.606 "digest": "sha384", 00:21:17.606 "dhgroup": "null" 00:21:17.606 } 00:21:17.606 } 00:21:17.606 ]' 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:17.606 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.864 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.864 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.864 07:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.124 07:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.062 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:19.322 
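The @91/@92/@93 frames scattered through this stretch are the harness's driving loops: every digest is crossed with every dhgroup and every key slot, and each combination gets one connect_authenticate pass. The shape of that driver, reconstructed from the trace; the array contents beyond what this excerpt shows (sha256/sha384, and null/ffdhe2048/ffdhe6144/ffdhe8192 among the groups) are assumptions, and connect_authenticate is the harness function traced above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# hostrpc, as expanded at @31 in the trace: rpc.py against the host's socket
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

for digest in "${digests[@]}"; do        # sha256, sha384, ... in this run
    for dhgroup in "${dhgroups[@]}"; do  # null, ffdhe2048, ..., ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do   # key slots 0..3
            # re-pin the host initiator before every attempt (@94 in the trace)
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

Inside connect_authenticate, the traced line ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to nothing when a slot has no controller key, which is why some add_host and attach calls in this log carry only --dhchap-key.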
07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.322 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.581 00:21:19.581 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.581 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.581 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.839 { 00:21:19.839 "cntlid": 53, 00:21:19.839 "qid": 0, 00:21:19.839 "state": "enabled", 00:21:19.839 "thread": "nvmf_tgt_poll_group_000", 00:21:19.839 "listen_address": { 00:21:19.839 "trtype": "TCP", 00:21:19.839 "adrfam": "IPv4", 00:21:19.839 "traddr": "10.0.0.2", 00:21:19.839 "trsvcid": "4420" 00:21:19.839 }, 00:21:19.839 "peer_address": { 00:21:19.839 "trtype": "TCP", 00:21:19.839 "adrfam": "IPv4", 00:21:19.839 "traddr": "10.0.0.1", 00:21:19.839 "trsvcid": "40410" 00:21:19.839 }, 00:21:19.839 "auth": { 00:21:19.839 "state": "completed", 00:21:19.839 "digest": "sha384", 00:21:19.839 "dhgroup": "null" 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ]' 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.839 07:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.099 07:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.038 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.296 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.554 00:21:21.554 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.554 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.554 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.840 { 00:21:21.840 "cntlid": 55, 00:21:21.840 "qid": 0, 00:21:21.840 "state": "enabled", 00:21:21.840 "thread": "nvmf_tgt_poll_group_000", 00:21:21.840 "listen_address": { 00:21:21.840 "trtype": "TCP", 00:21:21.840 "adrfam": "IPv4", 00:21:21.840 "traddr": "10.0.0.2", 00:21:21.840 "trsvcid": "4420" 00:21:21.840 }, 00:21:21.840 "peer_address": { 00:21:21.840 "trtype": "TCP", 00:21:21.840 "adrfam": "IPv4", 00:21:21.840 "traddr": "10.0.0.1", 00:21:21.840 "trsvcid": "42804" 00:21:21.840 }, 00:21:21.840 "auth": { 00:21:21.840 "state": "completed", 00:21:21.840 "digest": "sha384", 00:21:21.840 "dhgroup": "null" 00:21:21.840 } 00:21:21.840 } 00:21:21.840 ]' 00:21:21.840 07:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.840 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.840 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.097 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:22.097 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.097 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.097 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.097 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.353 07:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:21:23.283 07:48:14 
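The key3 connect above passes a --dhchap-secret but no --dhchap-ctrl-secret: slot 3 has no controller key in this run, so the exchange is unidirectional, meaning the controller authenticates the host but the host never challenges the controller. The DHHC-1 strings themselves embed a CRC, and recent nvme-cli builds ship a checker for them; a sketch follows, with the caveat that the command's availability and flag spelling depend on the nvme-cli version, so this is an assumption rather than a guarantee:

# validate the framing/CRC of a DHHC-1 secret before handing it to nvme connect;
# <secret> is a placeholder for a full string such as the DHHC-1:03:... value above
nvme check-dhchap-key --key='<secret>'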
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.283 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.539 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.826 00:21:23.826 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.826 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.826 07:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.084 { 00:21:24.084 "cntlid": 57, 00:21:24.084 "qid": 0, 00:21:24.084 "state": "enabled", 00:21:24.084 "thread": "nvmf_tgt_poll_group_000", 00:21:24.084 "listen_address": { 00:21:24.084 "trtype": "TCP", 00:21:24.084 "adrfam": "IPv4", 00:21:24.084 "traddr": "10.0.0.2", 00:21:24.084 "trsvcid": "4420" 00:21:24.084 }, 00:21:24.084 "peer_address": { 00:21:24.084 "trtype": "TCP", 00:21:24.084 "adrfam": "IPv4", 00:21:24.084 "traddr": "10.0.0.1", 00:21:24.084 "trsvcid": "42832" 00:21:24.084 }, 00:21:24.084 "auth": { 00:21:24.084 "state": "completed", 00:21:24.084 "digest": "sha384", 00:21:24.084 "dhgroup": "ffdhe2048" 00:21:24.084 } 00:21:24.084 } 00:21:24.084 ]' 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.084 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.341 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.341 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.341 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.596 07:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.532 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.790 07:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.049 00:21:26.049 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.049 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.049 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.307 { 00:21:26.307 "cntlid": 59, 00:21:26.307 "qid": 0, 00:21:26.307 "state": "enabled", 00:21:26.307 "thread": "nvmf_tgt_poll_group_000", 00:21:26.307 "listen_address": { 00:21:26.307 "trtype": "TCP", 00:21:26.307 "adrfam": "IPv4", 00:21:26.307 "traddr": "10.0.0.2", 00:21:26.307 "trsvcid": "4420" 00:21:26.307 }, 00:21:26.307 "peer_address": { 00:21:26.307 "trtype": "TCP", 00:21:26.307 "adrfam": "IPv4", 00:21:26.307 
"traddr": "10.0.0.1", 00:21:26.307 "trsvcid": "42848" 00:21:26.307 }, 00:21:26.307 "auth": { 00:21:26.307 "state": "completed", 00:21:26.307 "digest": "sha384", 00:21:26.307 "dhgroup": "ffdhe2048" 00:21:26.307 } 00:21:26.307 } 00:21:26.307 ]' 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.307 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.566 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.566 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.566 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.566 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.566 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.824 07:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.761 07:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.019 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.277 00:21:28.277 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.277 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.277 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.535 { 00:21:28.535 "cntlid": 61, 00:21:28.535 "qid": 0, 00:21:28.535 "state": "enabled", 00:21:28.535 "thread": "nvmf_tgt_poll_group_000", 00:21:28.535 "listen_address": { 00:21:28.535 "trtype": "TCP", 00:21:28.535 "adrfam": "IPv4", 00:21:28.535 "traddr": "10.0.0.2", 00:21:28.535 "trsvcid": "4420" 00:21:28.535 }, 00:21:28.535 "peer_address": { 00:21:28.535 "trtype": "TCP", 00:21:28.535 "adrfam": "IPv4", 00:21:28.535 "traddr": "10.0.0.1", 00:21:28.535 "trsvcid": "42872" 00:21:28.535 }, 00:21:28.535 "auth": { 00:21:28.535 "state": "completed", 00:21:28.535 "digest": "sha384", 00:21:28.535 "dhgroup": "ffdhe2048" 00:21:28.535 } 00:21:28.535 } 00:21:28.535 ]' 00:21:28.535 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.792 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.792 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.792 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.792 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.793 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.793 07:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.793 07:48:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.051 07:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:29.988 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.246 07:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.247 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.247 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.506 00:21:30.764 07:48:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.764 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.764 07:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.022 { 00:21:31.022 "cntlid": 63, 00:21:31.022 "qid": 0, 00:21:31.022 "state": "enabled", 00:21:31.022 "thread": "nvmf_tgt_poll_group_000", 00:21:31.022 "listen_address": { 00:21:31.022 "trtype": "TCP", 00:21:31.022 "adrfam": "IPv4", 00:21:31.022 "traddr": "10.0.0.2", 00:21:31.022 "trsvcid": "4420" 00:21:31.022 }, 00:21:31.022 "peer_address": { 00:21:31.022 "trtype": "TCP", 00:21:31.022 "adrfam": "IPv4", 00:21:31.022 "traddr": "10.0.0.1", 00:21:31.022 "trsvcid": "34490" 00:21:31.022 }, 00:21:31.022 "auth": { 00:21:31.022 "state": "completed", 00:21:31.022 "digest": "sha384", 00:21:31.022 "dhgroup": "ffdhe2048" 00:21:31.022 } 00:21:31.022 } 00:21:31.022 ]' 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.022 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.280 07:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
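The trace cycles through one fixed pattern per (digest, dhgroup, key) combination: restrict the host-side bdev_nvme layer to the DH-HMAC-CHAP digest and DH group under test, register the host NQN on the subsystem with the matching key pair, attach a controller through the SPDK host RPC, confirm via nvmf_subsystem_get_qpairs that the qpair's auth state reached "completed" with the expected digest and dhgroup, then tear down and repeat the handshake from the kernel initiator with nvme connect. Below is a minimal sketch of that loop, reconstructed from the trace rather than the literal target/auth.sh source; it assumes a target listening on 10.0.0.2:4420, a host RPC socket at /var/tmp/host.sock, and key slots key0..key3 with controller keys ckey0..ckey2 (slot 3 is exercised without a controller key in this run).

#!/usr/bin/env bash
# Hypothetical condensation of the auth loop seen in the trace; rpc_cmd and
# hostrpc mirror the two wrappers the script logs (target vs. host socket).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

rpc_cmd() { "$rpc" "$@"; }                        # target-side RPC
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side RPC

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # Pin the host to the digest/dhgroup combination under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # The controller key is optional; slot 3 is tested with a host key only.
    ckey=()
    [[ $keyid -lt 3 ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"
    # Attach through the SPDK host stack and verify the negotiated state.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384     ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done

The nvme connect lines in the log then exercise the same exchange from the kernel initiator, passing the serialized key material directly as DHHC-1:xx:... strings via --dhchap-secret/--dhchap-ctrl-secret, and each trailing "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" line is nvme-cli confirming the teardown before the next iteration begins.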
00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.216 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.473 07:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.040 00:21:33.040 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.040 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.040 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.297 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.297 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.297 07:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.297 07:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.297 07:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.297 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.297 { 
00:21:33.297 "cntlid": 65, 00:21:33.297 "qid": 0, 00:21:33.297 "state": "enabled", 00:21:33.297 "thread": "nvmf_tgt_poll_group_000", 00:21:33.297 "listen_address": { 00:21:33.297 "trtype": "TCP", 00:21:33.297 "adrfam": "IPv4", 00:21:33.297 "traddr": "10.0.0.2", 00:21:33.297 "trsvcid": "4420" 00:21:33.297 }, 00:21:33.297 "peer_address": { 00:21:33.297 "trtype": "TCP", 00:21:33.297 "adrfam": "IPv4", 00:21:33.297 "traddr": "10.0.0.1", 00:21:33.297 "trsvcid": "34514" 00:21:33.297 }, 00:21:33.297 "auth": { 00:21:33.297 "state": "completed", 00:21:33.298 "digest": "sha384", 00:21:33.298 "dhgroup": "ffdhe3072" 00:21:33.298 } 00:21:33.298 } 00:21:33.298 ]' 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.298 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.556 07:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:34.494 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.752 07:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.009 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.267 { 00:21:35.267 "cntlid": 67, 00:21:35.267 "qid": 0, 00:21:35.267 "state": "enabled", 00:21:35.267 "thread": "nvmf_tgt_poll_group_000", 00:21:35.267 "listen_address": { 00:21:35.267 "trtype": "TCP", 00:21:35.267 "adrfam": "IPv4", 00:21:35.267 "traddr": "10.0.0.2", 00:21:35.267 "trsvcid": "4420" 00:21:35.267 }, 00:21:35.267 "peer_address": { 00:21:35.267 "trtype": "TCP", 00:21:35.267 "adrfam": "IPv4", 00:21:35.267 "traddr": "10.0.0.1", 00:21:35.267 "trsvcid": "34538" 00:21:35.267 }, 00:21:35.267 "auth": { 00:21:35.267 "state": "completed", 00:21:35.267 "digest": "sha384", 00:21:35.267 "dhgroup": "ffdhe3072" 00:21:35.267 } 00:21:35.267 } 00:21:35.267 ]' 00:21:35.267 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.525 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.525 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.525 07:48:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.525 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.525 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.525 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.525 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.782 07:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:36.718 07:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.976 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.255 00:21:37.255 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.255 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.255 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.523 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.524 { 00:21:37.524 "cntlid": 69, 00:21:37.524 "qid": 0, 00:21:37.524 "state": "enabled", 00:21:37.524 "thread": "nvmf_tgt_poll_group_000", 00:21:37.524 "listen_address": { 00:21:37.524 "trtype": "TCP", 00:21:37.524 "adrfam": "IPv4", 00:21:37.524 "traddr": "10.0.0.2", 00:21:37.524 "trsvcid": "4420" 00:21:37.524 }, 00:21:37.524 "peer_address": { 00:21:37.524 "trtype": "TCP", 00:21:37.524 "adrfam": "IPv4", 00:21:37.524 "traddr": "10.0.0.1", 00:21:37.524 "trsvcid": "34570" 00:21:37.524 }, 00:21:37.524 "auth": { 00:21:37.524 "state": "completed", 00:21:37.524 "digest": "sha384", 00:21:37.524 "dhgroup": "ffdhe3072" 00:21:37.524 } 00:21:37.524 } 00:21:37.524 ]' 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.524 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.781 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.781 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.781 07:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.038 07:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret 
DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:38.973 07:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.231 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.488 00:21:39.488 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.488 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.488 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.746 { 00:21:39.746 "cntlid": 71, 00:21:39.746 "qid": 0, 00:21:39.746 "state": "enabled", 00:21:39.746 "thread": "nvmf_tgt_poll_group_000", 00:21:39.746 "listen_address": { 00:21:39.746 "trtype": "TCP", 00:21:39.746 "adrfam": "IPv4", 00:21:39.746 "traddr": "10.0.0.2", 00:21:39.746 "trsvcid": "4420" 00:21:39.746 }, 00:21:39.746 "peer_address": { 00:21:39.746 "trtype": "TCP", 00:21:39.746 "adrfam": "IPv4", 00:21:39.746 "traddr": "10.0.0.1", 00:21:39.746 "trsvcid": "34586" 00:21:39.746 }, 00:21:39.746 "auth": { 00:21:39.746 "state": "completed", 00:21:39.746 "digest": "sha384", 00:21:39.746 "dhgroup": "ffdhe3072" 00:21:39.746 } 00:21:39.746 } 00:21:39.746 ]' 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.746 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.004 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.004 07:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.004 07:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.004 07:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.004 07:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.261 07:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:41.197 07:48:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.455 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.713 00:21:41.713 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.713 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.713 07:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.972 { 00:21:41.972 "cntlid": 73, 00:21:41.972 "qid": 0, 00:21:41.972 "state": "enabled", 00:21:41.972 "thread": "nvmf_tgt_poll_group_000", 00:21:41.972 "listen_address": { 00:21:41.972 "trtype": "TCP", 00:21:41.972 "adrfam": "IPv4", 00:21:41.972 "traddr": "10.0.0.2", 00:21:41.972 "trsvcid": "4420" 00:21:41.972 }, 00:21:41.972 "peer_address": { 00:21:41.972 "trtype": "TCP", 00:21:41.972 "adrfam": "IPv4", 00:21:41.972 "traddr": "10.0.0.1", 00:21:41.972 "trsvcid": "51928" 00:21:41.972 }, 00:21:41.972 "auth": { 00:21:41.972 
"state": "completed", 00:21:41.972 "digest": "sha384", 00:21:41.972 "dhgroup": "ffdhe4096" 00:21:41.972 } 00:21:41.972 } 00:21:41.972 ]' 00:21:41.972 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.229 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.486 07:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:43.423 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.682 07:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.250 00:21:44.250 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.250 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.250 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.508 { 00:21:44.508 "cntlid": 75, 00:21:44.508 "qid": 0, 00:21:44.508 "state": "enabled", 00:21:44.508 "thread": "nvmf_tgt_poll_group_000", 00:21:44.508 "listen_address": { 00:21:44.508 "trtype": "TCP", 00:21:44.508 "adrfam": "IPv4", 00:21:44.508 "traddr": "10.0.0.2", 00:21:44.508 "trsvcid": "4420" 00:21:44.508 }, 00:21:44.508 "peer_address": { 00:21:44.508 "trtype": "TCP", 00:21:44.508 "adrfam": "IPv4", 00:21:44.508 "traddr": "10.0.0.1", 00:21:44.508 "trsvcid": "51962" 00:21:44.508 }, 00:21:44.508 "auth": { 00:21:44.508 "state": "completed", 00:21:44.508 "digest": "sha384", 00:21:44.508 "dhgroup": "ffdhe4096" 00:21:44.508 } 00:21:44.508 } 00:21:44.508 ]' 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.508 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.768 07:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.705 07:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.964 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:46.532 00:21:46.532 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.532 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.532 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.791 { 00:21:46.791 "cntlid": 77, 00:21:46.791 "qid": 0, 00:21:46.791 "state": "enabled", 00:21:46.791 "thread": "nvmf_tgt_poll_group_000", 00:21:46.791 "listen_address": { 00:21:46.791 "trtype": "TCP", 00:21:46.791 "adrfam": "IPv4", 00:21:46.791 "traddr": "10.0.0.2", 00:21:46.791 "trsvcid": "4420" 00:21:46.791 }, 00:21:46.791 "peer_address": { 00:21:46.791 "trtype": "TCP", 00:21:46.791 "adrfam": "IPv4", 00:21:46.791 "traddr": "10.0.0.1", 00:21:46.791 "trsvcid": "51982" 00:21:46.791 }, 00:21:46.791 "auth": { 00:21:46.791 "state": "completed", 00:21:46.791 "digest": "sha384", 00:21:46.791 "dhgroup": "ffdhe4096" 00:21:46.791 } 00:21:46.791 } 00:21:46.791 ]' 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.791 07:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.050 07:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:47.982 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.982 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.982 07:48:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.983 07:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.983 07:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.983 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.983 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:47.983 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.240 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.807 00:21:48.808 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.808 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.808 07:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.066 { 00:21:49.066 "cntlid": 79, 00:21:49.066 "qid": 
0, 00:21:49.066 "state": "enabled", 00:21:49.066 "thread": "nvmf_tgt_poll_group_000", 00:21:49.066 "listen_address": { 00:21:49.066 "trtype": "TCP", 00:21:49.066 "adrfam": "IPv4", 00:21:49.066 "traddr": "10.0.0.2", 00:21:49.066 "trsvcid": "4420" 00:21:49.066 }, 00:21:49.066 "peer_address": { 00:21:49.066 "trtype": "TCP", 00:21:49.066 "adrfam": "IPv4", 00:21:49.066 "traddr": "10.0.0.1", 00:21:49.066 "trsvcid": "52010" 00:21:49.066 }, 00:21:49.066 "auth": { 00:21:49.066 "state": "completed", 00:21:49.066 "digest": "sha384", 00:21:49.066 "dhgroup": "ffdhe4096" 00:21:49.066 } 00:21:49.066 } 00:21:49.066 ]' 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.066 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.326 07:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:21:50.259 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.259 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.259 07:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.259 07:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.517 07:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.517 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.517 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.517 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:50.517 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:50.776 07:48:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.776 07:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.384 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.384 { 00:21:51.384 "cntlid": 81, 00:21:51.384 "qid": 0, 00:21:51.384 "state": "enabled", 00:21:51.384 "thread": "nvmf_tgt_poll_group_000", 00:21:51.384 "listen_address": { 00:21:51.384 "trtype": "TCP", 00:21:51.384 "adrfam": "IPv4", 00:21:51.384 "traddr": "10.0.0.2", 00:21:51.384 "trsvcid": "4420" 00:21:51.384 }, 00:21:51.384 "peer_address": { 00:21:51.384 "trtype": "TCP", 00:21:51.384 "adrfam": "IPv4", 00:21:51.384 "traddr": "10.0.0.1", 00:21:51.384 "trsvcid": "43826" 00:21:51.384 }, 00:21:51.384 "auth": { 00:21:51.384 "state": "completed", 00:21:51.384 "digest": "sha384", 00:21:51.384 "dhgroup": "ffdhe6144" 00:21:51.384 } 00:21:51.384 } 00:21:51.384 ]' 00:21:51.384 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.642 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.900 07:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:52.832 07:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.090 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.657 00:21:53.657 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.657 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.657 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.914 { 00:21:53.914 "cntlid": 83, 00:21:53.914 "qid": 0, 00:21:53.914 "state": "enabled", 00:21:53.914 "thread": "nvmf_tgt_poll_group_000", 00:21:53.914 "listen_address": { 00:21:53.914 "trtype": "TCP", 00:21:53.914 "adrfam": "IPv4", 00:21:53.914 "traddr": "10.0.0.2", 00:21:53.914 "trsvcid": "4420" 00:21:53.914 }, 00:21:53.914 "peer_address": { 00:21:53.914 "trtype": "TCP", 00:21:53.914 "adrfam": "IPv4", 00:21:53.914 "traddr": "10.0.0.1", 00:21:53.914 "trsvcid": "43852" 00:21:53.914 }, 00:21:53.914 "auth": { 00:21:53.914 "state": "completed", 00:21:53.914 "digest": "sha384", 00:21:53.914 "dhgroup": "ffdhe6144" 00:21:53.914 } 00:21:53.914 } 00:21:53.914 ]' 00:21:53.914 07:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.914 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.172 07:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret 
DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:21:55.107 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.368 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.627 07:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.628 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.628 07:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.197 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.197 { 00:21:56.197 "cntlid": 85, 00:21:56.197 "qid": 0, 00:21:56.197 "state": "enabled", 00:21:56.197 "thread": "nvmf_tgt_poll_group_000", 00:21:56.197 "listen_address": { 00:21:56.197 "trtype": "TCP", 00:21:56.197 "adrfam": "IPv4", 00:21:56.197 "traddr": "10.0.0.2", 00:21:56.197 "trsvcid": "4420" 00:21:56.197 }, 00:21:56.197 "peer_address": { 00:21:56.197 "trtype": "TCP", 00:21:56.197 "adrfam": "IPv4", 00:21:56.197 "traddr": "10.0.0.1", 00:21:56.197 "trsvcid": "43872" 00:21:56.197 }, 00:21:56.197 "auth": { 00:21:56.197 "state": "completed", 00:21:56.197 "digest": "sha384", 00:21:56.197 "dhgroup": "ffdhe6144" 00:21:56.197 } 00:21:56.197 } 00:21:56.197 ]' 00:21:56.197 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.457 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.715 07:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
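The loop this trace keeps replaying is easier to follow in condensed form. The sketch below is assembled only from calls visible in the trace itself; RPC_PY, SUBNQN and HOSTNQN are illustrative shell variables, and the target-side calls (which the script actually issues through its rpc_cmd helper) are shown here against rpc.py's default socket, which is an assumption.

    # One connect_authenticate pass (digest, dhgroup and key index vary per pass)
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side: restrict bdev_nvme to the digest/dhgroup under test.
    $RPC_PY -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: allow the host with this pass's DH-HMAC-CHAP key pair.
    $RPC_PY nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller, authenticating with the same keys.
    $RPC_PY -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
            -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Target side: confirm the new qpair finished authentication
    # (the trace also checks .auth.digest and .auth.dhgroup the same way).
    $RPC_PY nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
    # expected output: completed

    # Tear down before the next digest/dhgroup/key combination.
    $RPC_PY -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $RPC_PY nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
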
00:21:57.675 07:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.933 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.500 00:21:58.500 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.500 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.500 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.758 { 00:21:58.758 "cntlid": 87, 00:21:58.758 "qid": 0, 00:21:58.758 "state": "enabled", 00:21:58.758 "thread": "nvmf_tgt_poll_group_000", 00:21:58.758 "listen_address": { 00:21:58.758 "trtype": "TCP", 00:21:58.758 "adrfam": "IPv4", 00:21:58.758 "traddr": "10.0.0.2", 00:21:58.758 "trsvcid": "4420" 00:21:58.758 }, 00:21:58.758 "peer_address": { 00:21:58.758 "trtype": "TCP", 00:21:58.758 "adrfam": "IPv4", 00:21:58.758 "traddr": "10.0.0.1", 00:21:58.758 "trsvcid": "43892" 00:21:58.758 }, 00:21:58.758 "auth": { 00:21:58.758 "state": "completed", 
00:21:58.758 "digest": "sha384", 00:21:58.758 "dhgroup": "ffdhe6144" 00:21:58.758 } 00:21:58.758 } 00:21:58.758 ]' 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.758 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.071 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.071 07:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.071 07:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.071 07:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.071 07:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.071 07:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:00.001 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.257 07:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.193 00:22:01.193 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.193 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.193 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.450 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.450 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.450 07:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.450 07:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 07:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.450 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.450 { 00:22:01.450 "cntlid": 89, 00:22:01.450 "qid": 0, 00:22:01.450 "state": "enabled", 00:22:01.450 "thread": "nvmf_tgt_poll_group_000", 00:22:01.450 "listen_address": { 00:22:01.451 "trtype": "TCP", 00:22:01.451 "adrfam": "IPv4", 00:22:01.451 "traddr": "10.0.0.2", 00:22:01.451 "trsvcid": "4420" 00:22:01.451 }, 00:22:01.451 "peer_address": { 00:22:01.451 "trtype": "TCP", 00:22:01.451 "adrfam": "IPv4", 00:22:01.451 "traddr": "10.0.0.1", 00:22:01.451 "trsvcid": "57666" 00:22:01.451 }, 00:22:01.451 "auth": { 00:22:01.451 "state": "completed", 00:22:01.451 "digest": "sha384", 00:22:01.451 "dhgroup": "ffdhe8192" 00:22:01.451 } 00:22:01.451 } 00:22:01.451 ]' 00:22:01.451 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.451 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.451 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.451 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.451 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.708 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.708 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.708 07:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.966 07:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.898 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.156 07:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
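Between RPC passes the script also proves the path end to end with the kernel initiator: nvme-cli connects using the generated DH-HMAC-CHAP secrets and is then disconnected. The condensed form below is copied from the commands in this trace; the DHHC-1 values are truncated here, and the full strings appear verbatim above.

    # In-band check with nvme-cli (secrets truncated; see trace for full values)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
         --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
         --dhchap-secret 'DHHC-1:00:MTc4ZDk3...' \
         --dhchap-ctrl-secret 'DHHC-1:03:NjNlODg3...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # trace then reports: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
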
00:22:04.092 00:22:04.092 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.092 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.092 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.350 { 00:22:04.350 "cntlid": 91, 00:22:04.350 "qid": 0, 00:22:04.350 "state": "enabled", 00:22:04.350 "thread": "nvmf_tgt_poll_group_000", 00:22:04.350 "listen_address": { 00:22:04.350 "trtype": "TCP", 00:22:04.350 "adrfam": "IPv4", 00:22:04.350 "traddr": "10.0.0.2", 00:22:04.350 "trsvcid": "4420" 00:22:04.350 }, 00:22:04.350 "peer_address": { 00:22:04.350 "trtype": "TCP", 00:22:04.350 "adrfam": "IPv4", 00:22:04.350 "traddr": "10.0.0.1", 00:22:04.350 "trsvcid": "57702" 00:22:04.350 }, 00:22:04.350 "auth": { 00:22:04.350 "state": "completed", 00:22:04.350 "digest": "sha384", 00:22:04.350 "dhgroup": "ffdhe8192" 00:22:04.350 } 00:22:04.350 } 00:22:04.350 ]' 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.350 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.608 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.608 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.608 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.865 07:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:05.881 07:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.881 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.816 00:22:06.816 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.816 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.816 07:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.074 { 
00:22:07.074 "cntlid": 93, 00:22:07.074 "qid": 0, 00:22:07.074 "state": "enabled", 00:22:07.074 "thread": "nvmf_tgt_poll_group_000", 00:22:07.074 "listen_address": { 00:22:07.074 "trtype": "TCP", 00:22:07.074 "adrfam": "IPv4", 00:22:07.074 "traddr": "10.0.0.2", 00:22:07.074 "trsvcid": "4420" 00:22:07.074 }, 00:22:07.074 "peer_address": { 00:22:07.074 "trtype": "TCP", 00:22:07.074 "adrfam": "IPv4", 00:22:07.074 "traddr": "10.0.0.1", 00:22:07.074 "trsvcid": "57734" 00:22:07.074 }, 00:22:07.074 "auth": { 00:22:07.074 "state": "completed", 00:22:07.074 "digest": "sha384", 00:22:07.074 "dhgroup": "ffdhe8192" 00:22:07.074 } 00:22:07.074 } 00:22:07.074 ]' 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.074 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.332 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.332 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.332 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.332 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.332 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.590 07:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:08.527 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:08.786 07:48:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.786 07:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.722 00:22:09.722 07:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.722 07:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.722 07:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.980 07:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.980 07:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.980 07:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.980 07:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.980 07:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.980 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.980 { 00:22:09.980 "cntlid": 95, 00:22:09.980 "qid": 0, 00:22:09.980 "state": "enabled", 00:22:09.980 "thread": "nvmf_tgt_poll_group_000", 00:22:09.980 "listen_address": { 00:22:09.980 "trtype": "TCP", 00:22:09.980 "adrfam": "IPv4", 00:22:09.980 "traddr": "10.0.0.2", 00:22:09.980 "trsvcid": "4420" 00:22:09.980 }, 00:22:09.981 "peer_address": { 00:22:09.981 "trtype": "TCP", 00:22:09.981 "adrfam": "IPv4", 00:22:09.981 "traddr": "10.0.0.1", 00:22:09.981 "trsvcid": "57772" 00:22:09.981 }, 00:22:09.981 "auth": { 00:22:09.981 "state": "completed", 00:22:09.981 "digest": "sha384", 00:22:09.981 "dhgroup": "ffdhe8192" 00:22:09.981 } 00:22:09.981 } 00:22:09.981 ]' 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.981 07:49:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.981 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.239 07:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:11.176 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.177 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.434 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.435 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.000 00:22:12.000 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.000 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.000 07:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.000 { 00:22:12.000 "cntlid": 97, 00:22:12.000 "qid": 0, 00:22:12.000 "state": "enabled", 00:22:12.000 "thread": "nvmf_tgt_poll_group_000", 00:22:12.000 "listen_address": { 00:22:12.000 "trtype": "TCP", 00:22:12.000 "adrfam": "IPv4", 00:22:12.000 "traddr": "10.0.0.2", 00:22:12.000 "trsvcid": "4420" 00:22:12.000 }, 00:22:12.000 "peer_address": { 00:22:12.000 "trtype": "TCP", 00:22:12.000 "adrfam": "IPv4", 00:22:12.000 "traddr": "10.0.0.1", 00:22:12.000 "trsvcid": "57700" 00:22:12.000 }, 00:22:12.000 "auth": { 00:22:12.000 "state": "completed", 00:22:12.000 "digest": "sha512", 00:22:12.000 "dhgroup": "null" 00:22:12.000 } 00:22:12.000 } 00:22:12.000 ]' 00:22:12.000 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.257 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.257 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.258 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:12.258 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.258 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.258 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.258 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.516 07:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret 
DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:13.452 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.710 07:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.969 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.229 07:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.488 { 00:22:14.488 "cntlid": 99, 00:22:14.488 "qid": 0, 00:22:14.488 "state": "enabled", 00:22:14.488 "thread": "nvmf_tgt_poll_group_000", 00:22:14.488 "listen_address": { 00:22:14.488 "trtype": "TCP", 00:22:14.488 "adrfam": "IPv4", 00:22:14.488 "traddr": "10.0.0.2", 00:22:14.488 "trsvcid": "4420" 00:22:14.488 }, 00:22:14.488 "peer_address": { 00:22:14.488 "trtype": "TCP", 00:22:14.488 "adrfam": "IPv4", 00:22:14.488 "traddr": "10.0.0.1", 00:22:14.488 "trsvcid": "57746" 00:22:14.488 }, 00:22:14.488 "auth": { 00:22:14.488 "state": "completed", 00:22:14.488 "digest": "sha512", 00:22:14.488 "dhgroup": "null" 00:22:14.488 } 00:22:14.488 } 00:22:14.488 ]' 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.488 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.746 07:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.684 07:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:15.684 07:49:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.943 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.202 00:22:16.202 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.202 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.202 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.460 { 00:22:16.460 "cntlid": 101, 00:22:16.460 "qid": 0, 00:22:16.460 "state": "enabled", 00:22:16.460 "thread": "nvmf_tgt_poll_group_000", 00:22:16.460 "listen_address": { 00:22:16.460 "trtype": "TCP", 00:22:16.460 "adrfam": "IPv4", 00:22:16.460 "traddr": "10.0.0.2", 00:22:16.460 "trsvcid": "4420" 00:22:16.460 }, 00:22:16.460 "peer_address": { 00:22:16.460 "trtype": "TCP", 00:22:16.460 "adrfam": "IPv4", 00:22:16.460 "traddr": "10.0.0.1", 00:22:16.460 "trsvcid": "57768" 00:22:16.460 }, 00:22:16.460 "auth": 
{ 00:22:16.460 "state": "completed", 00:22:16.460 "digest": "sha512", 00:22:16.460 "dhgroup": "null" 00:22:16.460 } 00:22:16.460 } 00:22:16.460 ]' 00:22:16.460 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.718 07:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.976 07:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:22:17.912 07:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:17.912 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.170 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.428 00:22:18.428 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.428 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.428 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.685 { 00:22:18.685 "cntlid": 103, 00:22:18.685 "qid": 0, 00:22:18.685 "state": "enabled", 00:22:18.685 "thread": "nvmf_tgt_poll_group_000", 00:22:18.685 "listen_address": { 00:22:18.685 "trtype": "TCP", 00:22:18.685 "adrfam": "IPv4", 00:22:18.685 "traddr": "10.0.0.2", 00:22:18.685 "trsvcid": "4420" 00:22:18.685 }, 00:22:18.685 "peer_address": { 00:22:18.685 "trtype": "TCP", 00:22:18.685 "adrfam": "IPv4", 00:22:18.685 "traddr": "10.0.0.1", 00:22:18.685 "trsvcid": "57794" 00:22:18.685 }, 00:22:18.685 "auth": { 00:22:18.685 "state": "completed", 00:22:18.685 "digest": "sha512", 00:22:18.685 "dhgroup": "null" 00:22:18.685 } 00:22:18.685 } 00:22:18.685 ]' 00:22:18.685 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.943 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.943 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.943 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:18.943 07:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.943 07:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.943 07:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.943 07:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.231 07:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.166 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.425 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.684 00:22:20.684 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.684 07:49:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.684 07:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.942 { 00:22:20.942 "cntlid": 105, 00:22:20.942 "qid": 0, 00:22:20.942 "state": "enabled", 00:22:20.942 "thread": "nvmf_tgt_poll_group_000", 00:22:20.942 "listen_address": { 00:22:20.942 "trtype": "TCP", 00:22:20.942 "adrfam": "IPv4", 00:22:20.942 "traddr": "10.0.0.2", 00:22:20.942 "trsvcid": "4420" 00:22:20.942 }, 00:22:20.942 "peer_address": { 00:22:20.942 "trtype": "TCP", 00:22:20.942 "adrfam": "IPv4", 00:22:20.942 "traddr": "10.0.0.1", 00:22:20.942 "trsvcid": "50460" 00:22:20.942 }, 00:22:20.942 "auth": { 00:22:20.942 "state": "completed", 00:22:20.942 "digest": "sha512", 00:22:20.942 "dhgroup": "ffdhe2048" 00:22:20.942 } 00:22:20.942 } 00:22:20.942 ]' 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.942 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.201 07:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
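
The records above are one full pass of the connect_authenticate helper in target/auth.sh, and the same shape repeats for every digest/dhgroup/key combination in the trace. A condensed sketch of that per-iteration RPC sequence, reconstructed from the xtrace itself (rpc.py paths, NQNs, addresses, and flags are taken verbatim from the log; the digest/dhgroup/keyid assignments are illustrative values matching the ffdhe2048/key0 pass just completed):

# One iteration of the digest x dhgroup x key matrix that target/auth.sh walks through.
# Values mirror the ffdhe2048/key0 pass in this trace.
digest=sha512 dhgroup=ffdhe2048 keyid=0
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc="$rpc -s /var/tmp/host.sock"   # host-side SPDK app socket, as logged

# 1. Pin the host to a single digest/dhgroup combination.
$hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Register the host NQN on the target subsystem with the key pair under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach from the host side, which performs the DH-HMAC-CHAP handshake.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Verify the negotiated auth parameters on the target's qpair, then tear down.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect: $digest
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: $dhgroup
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
$hostrpc bdev_nvme_detach_controller nvme0

After this RPC-level round-trip, each iteration repeats the handshake through the kernel initiator (nvme connect with the matching DHHC-1 secret and, where a controller key is configured, --dhchap-ctrl-secret), disconnects, and removes the host from the subsystem before moving on to the next key, as the surrounding records show.
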
00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.578 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.579 07:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.579 07:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.579 07:49:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.579 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.579 07:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.836 00:22:22.836 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.836 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.836 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.093 { 00:22:23.093 "cntlid": 107, 00:22:23.093 "qid": 0, 00:22:23.093 "state": "enabled", 00:22:23.093 "thread": 
"nvmf_tgt_poll_group_000", 00:22:23.093 "listen_address": { 00:22:23.093 "trtype": "TCP", 00:22:23.093 "adrfam": "IPv4", 00:22:23.093 "traddr": "10.0.0.2", 00:22:23.093 "trsvcid": "4420" 00:22:23.093 }, 00:22:23.093 "peer_address": { 00:22:23.093 "trtype": "TCP", 00:22:23.093 "adrfam": "IPv4", 00:22:23.093 "traddr": "10.0.0.1", 00:22:23.093 "trsvcid": "50476" 00:22:23.093 }, 00:22:23.093 "auth": { 00:22:23.093 "state": "completed", 00:22:23.093 "digest": "sha512", 00:22:23.093 "dhgroup": "ffdhe2048" 00:22:23.093 } 00:22:23.093 } 00:22:23.093 ]' 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.093 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.351 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.351 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.351 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.351 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.351 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.608 07:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.545 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:24.803 07:49:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.803 07:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.061 00:22:25.061 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.061 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.061 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.319 { 00:22:25.319 "cntlid": 109, 00:22:25.319 "qid": 0, 00:22:25.319 "state": "enabled", 00:22:25.319 "thread": "nvmf_tgt_poll_group_000", 00:22:25.319 "listen_address": { 00:22:25.319 "trtype": "TCP", 00:22:25.319 "adrfam": "IPv4", 00:22:25.319 "traddr": "10.0.0.2", 00:22:25.319 "trsvcid": "4420" 00:22:25.319 }, 00:22:25.319 "peer_address": { 00:22:25.319 "trtype": "TCP", 00:22:25.319 "adrfam": "IPv4", 00:22:25.319 "traddr": "10.0.0.1", 00:22:25.319 "trsvcid": "50512" 00:22:25.319 }, 00:22:25.319 "auth": { 00:22:25.319 "state": "completed", 00:22:25.319 "digest": "sha512", 00:22:25.319 "dhgroup": "ffdhe2048" 00:22:25.319 } 00:22:25.319 } 00:22:25.319 ]' 00:22:25.319 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.577 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.835 07:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.771 07:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.029 07:49:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.286 00:22:27.286 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.286 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.286 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.543 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.544 { 00:22:27.544 "cntlid": 111, 00:22:27.544 "qid": 0, 00:22:27.544 "state": "enabled", 00:22:27.544 "thread": "nvmf_tgt_poll_group_000", 00:22:27.544 "listen_address": { 00:22:27.544 "trtype": "TCP", 00:22:27.544 "adrfam": "IPv4", 00:22:27.544 "traddr": "10.0.0.2", 00:22:27.544 "trsvcid": "4420" 00:22:27.544 }, 00:22:27.544 "peer_address": { 00:22:27.544 "trtype": "TCP", 00:22:27.544 "adrfam": "IPv4", 00:22:27.544 "traddr": "10.0.0.1", 00:22:27.544 "trsvcid": "50540" 00:22:27.544 }, 00:22:27.544 "auth": { 00:22:27.544 "state": "completed", 00:22:27.544 "digest": "sha512", 00:22:27.544 "dhgroup": "ffdhe2048" 00:22:27.544 } 00:22:27.544 } 00:22:27.544 ]' 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.544 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.801 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.801 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.801 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.801 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.801 07:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.058 07:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:28.999 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.256 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.820 00:22:29.820 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.820 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.820 07:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.820 { 00:22:29.820 "cntlid": 113, 00:22:29.820 "qid": 0, 00:22:29.820 "state": "enabled", 00:22:29.820 "thread": "nvmf_tgt_poll_group_000", 00:22:29.820 "listen_address": { 00:22:29.820 "trtype": "TCP", 00:22:29.820 "adrfam": "IPv4", 00:22:29.820 "traddr": "10.0.0.2", 00:22:29.820 "trsvcid": "4420" 00:22:29.820 }, 00:22:29.820 "peer_address": { 00:22:29.820 "trtype": "TCP", 00:22:29.820 "adrfam": "IPv4", 00:22:29.820 "traddr": "10.0.0.1", 00:22:29.820 "trsvcid": "50574" 00:22:29.820 }, 00:22:29.820 "auth": { 00:22:29.820 "state": "completed", 00:22:29.820 "digest": "sha512", 00:22:29.820 "dhgroup": "ffdhe3072" 00:22:29.820 } 00:22:29.820 } 00:22:29.820 ]' 00:22:29.820 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.077 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.334 07:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.268 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.526 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.783 00:22:31.783 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.783 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.783 07:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.041 { 00:22:32.041 "cntlid": 115, 00:22:32.041 "qid": 0, 00:22:32.041 "state": "enabled", 00:22:32.041 "thread": "nvmf_tgt_poll_group_000", 00:22:32.041 "listen_address": { 00:22:32.041 "trtype": "TCP", 00:22:32.041 "adrfam": "IPv4", 00:22:32.041 "traddr": "10.0.0.2", 00:22:32.041 "trsvcid": "4420" 00:22:32.041 }, 00:22:32.041 "peer_address": { 00:22:32.041 "trtype": "TCP", 00:22:32.041 "adrfam": "IPv4", 00:22:32.041 "traddr": "10.0.0.1", 00:22:32.041 "trsvcid": "33614" 00:22:32.041 }, 00:22:32.041 "auth": { 00:22:32.041 "state": "completed", 00:22:32.041 "digest": "sha512", 00:22:32.041 "dhgroup": "ffdhe3072" 00:22:32.041 } 00:22:32.041 } 
00:22:32.041 ]' 00:22:32.041 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.298 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.557 07:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.512 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.769 07:49:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.769 07:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.027 00:22:34.027 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.027 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.027 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.286 { 00:22:34.286 "cntlid": 117, 00:22:34.286 "qid": 0, 00:22:34.286 "state": "enabled", 00:22:34.286 "thread": "nvmf_tgt_poll_group_000", 00:22:34.286 "listen_address": { 00:22:34.286 "trtype": "TCP", 00:22:34.286 "adrfam": "IPv4", 00:22:34.286 "traddr": "10.0.0.2", 00:22:34.286 "trsvcid": "4420" 00:22:34.286 }, 00:22:34.286 "peer_address": { 00:22:34.286 "trtype": "TCP", 00:22:34.286 "adrfam": "IPv4", 00:22:34.286 "traddr": "10.0.0.1", 00:22:34.286 "trsvcid": "33632" 00:22:34.286 }, 00:22:34.286 "auth": { 00:22:34.286 "state": "completed", 00:22:34.286 "digest": "sha512", 00:22:34.286 "dhgroup": "ffdhe3072" 00:22:34.286 } 00:22:34.286 } 00:22:34.286 ]' 00:22:34.286 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.543 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.800 07:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:35.733 07:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.990 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.248 00:22:36.248 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.248 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.248 07:49:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.506 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.506 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.506 07:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.506 07:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.506 07:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.506 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.506 { 00:22:36.506 "cntlid": 119, 00:22:36.506 "qid": 0, 00:22:36.506 "state": "enabled", 00:22:36.506 "thread": "nvmf_tgt_poll_group_000", 00:22:36.506 "listen_address": { 00:22:36.506 "trtype": "TCP", 00:22:36.506 "adrfam": "IPv4", 00:22:36.506 "traddr": "10.0.0.2", 00:22:36.506 "trsvcid": "4420" 00:22:36.506 }, 00:22:36.506 "peer_address": { 00:22:36.506 "trtype": "TCP", 00:22:36.506 "adrfam": "IPv4", 00:22:36.506 "traddr": "10.0.0.1", 00:22:36.506 "trsvcid": "33648" 00:22:36.506 }, 00:22:36.506 "auth": { 00:22:36.506 "state": "completed", 00:22:36.506 "digest": "sha512", 00:22:36.506 "dhgroup": "ffdhe3072" 00:22:36.506 } 00:22:36.506 } 00:22:36.506 ]' 00:22:36.507 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.507 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.507 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.765 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.765 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.765 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.765 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.765 07:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.024 07:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.959 07:49:28 
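The traces from target/auth.sh@92 onward come from a nested driver loop: an outer pass per DH group and an inner pass per key index, reconfiguring the host before each authentication attempt. A minimal sketch of that loop, reconstructed from the xtrace (the dhgroups list is inferred from the groups exercised in this log; the surrounding digest loop and the keys array setup happen earlier in the run):

    # Sketch of the driver loop behind the @92-@96 trace lines; not the verbatim
    # script. Each pass pins the host to one digest/dhgroup pair, then runs the
    # full attach/verify/detach cycle via connect_authenticate.
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
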
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:37.959 07:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.216 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.217 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.475 00:22:38.475 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.475 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.475 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.733 { 00:22:38.733 "cntlid": 121, 00:22:38.733 "qid": 0, 00:22:38.733 "state": "enabled", 00:22:38.733 "thread": "nvmf_tgt_poll_group_000", 00:22:38.733 "listen_address": { 00:22:38.733 "trtype": "TCP", 00:22:38.733 "adrfam": "IPv4", 
00:22:38.733 "traddr": "10.0.0.2", 00:22:38.733 "trsvcid": "4420" 00:22:38.733 }, 00:22:38.733 "peer_address": { 00:22:38.733 "trtype": "TCP", 00:22:38.733 "adrfam": "IPv4", 00:22:38.733 "traddr": "10.0.0.1", 00:22:38.733 "trsvcid": "33680" 00:22:38.733 }, 00:22:38.733 "auth": { 00:22:38.733 "state": "completed", 00:22:38.733 "digest": "sha512", 00:22:38.733 "dhgroup": "ffdhe4096" 00:22:38.733 } 00:22:38.733 } 00:22:38.733 ]' 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.733 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.991 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.991 07:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.991 07:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.991 07:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.991 07:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.250 07:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.184 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:40.442 07:49:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.442 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.699 00:22:40.699 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.699 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.699 07:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.956 { 00:22:40.956 "cntlid": 123, 00:22:40.956 "qid": 0, 00:22:40.956 "state": "enabled", 00:22:40.956 "thread": "nvmf_tgt_poll_group_000", 00:22:40.956 "listen_address": { 00:22:40.956 "trtype": "TCP", 00:22:40.956 "adrfam": "IPv4", 00:22:40.956 "traddr": "10.0.0.2", 00:22:40.956 "trsvcid": "4420" 00:22:40.956 }, 00:22:40.956 "peer_address": { 00:22:40.956 "trtype": "TCP", 00:22:40.956 "adrfam": "IPv4", 00:22:40.956 "traddr": "10.0.0.1", 00:22:40.956 "trsvcid": "51558" 00:22:40.956 }, 00:22:40.956 "auth": { 00:22:40.956 "state": "completed", 00:22:40.956 "digest": "sha512", 00:22:40.956 "dhgroup": "ffdhe4096" 00:22:40.956 } 00:22:40.956 } 00:22:40.956 ]' 00:22:40.956 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.214 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.214 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.214 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.214 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.214 07:49:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.214 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.214 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.471 07:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.406 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.663 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.664 07:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.921 00:22:42.921 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.921 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.921 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.178 { 00:22:43.178 "cntlid": 125, 00:22:43.178 "qid": 0, 00:22:43.178 "state": "enabled", 00:22:43.178 "thread": "nvmf_tgt_poll_group_000", 00:22:43.178 "listen_address": { 00:22:43.178 "trtype": "TCP", 00:22:43.178 "adrfam": "IPv4", 00:22:43.178 "traddr": "10.0.0.2", 00:22:43.178 "trsvcid": "4420" 00:22:43.178 }, 00:22:43.178 "peer_address": { 00:22:43.178 "trtype": "TCP", 00:22:43.178 "adrfam": "IPv4", 00:22:43.178 "traddr": "10.0.0.1", 00:22:43.178 "trsvcid": "51574" 00:22:43.178 }, 00:22:43.178 "auth": { 00:22:43.178 "state": "completed", 00:22:43.178 "digest": "sha512", 00:22:43.178 "dhgroup": "ffdhe4096" 00:22:43.178 } 00:22:43.178 } 00:22:43.178 ]' 00:22:43.178 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.436 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.693 07:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
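The `--dhchap-secret`/`--dhchap-ctrl-secret` strings follow the NVMe-oF DH-HMAC-CHAP secret representation, `DHHC-1:<t>:<base64>:`, where the second field names the transformation applied to the key (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key plus a CRC-32 check. The shape of the host-side connect used throughout this excerpt, with placeholder values rather than the secrets above:

    # Placeholder sketch of the nvme-cli connect seen in this log; $HOST_UUID
    # and the DHHC-1 strings are illustrative, not the real values. Supplying
    # --dhchap-ctrl-secret requests bidirectional authentication.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$HOST_UUID" --hostid "$HOST_UUID" \
        --dhchap-secret "DHHC-1:00:<base64-key-and-crc>:" \
        --dhchap-ctrl-secret "DHHC-1:00:<base64-key-and-crc>:"
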
00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.630 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.888 07:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:45.145 00:22:45.145 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.145 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.145 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.404 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.404 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.404 07:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.404 07:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.662 { 00:22:45.662 "cntlid": 127, 00:22:45.662 "qid": 0, 00:22:45.662 "state": "enabled", 00:22:45.662 "thread": "nvmf_tgt_poll_group_000", 00:22:45.662 "listen_address": { 00:22:45.662 "trtype": "TCP", 00:22:45.662 "adrfam": "IPv4", 00:22:45.662 "traddr": "10.0.0.2", 00:22:45.662 "trsvcid": "4420" 00:22:45.662 }, 00:22:45.662 "peer_address": { 00:22:45.662 "trtype": "TCP", 00:22:45.662 "adrfam": "IPv4", 00:22:45.662 "traddr": "10.0.0.1", 00:22:45.662 "trsvcid": "51604" 00:22:45.662 }, 00:22:45.662 "auth": { 00:22:45.662 "state": "completed", 00:22:45.662 "digest": "sha512", 00:22:45.662 "dhgroup": "ffdhe4096" 00:22:45.662 } 00:22:45.662 } 00:22:45.662 ]' 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.662 07:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.919 07:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:46.901 07:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.901 07:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.901 07:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.901 07:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.901 07:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.901 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.901 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.901 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:46.901 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.159 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.726 00:22:47.727 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.727 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.727 07:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.984 { 00:22:47.984 "cntlid": 129, 00:22:47.984 "qid": 0, 00:22:47.984 "state": "enabled", 00:22:47.984 "thread": "nvmf_tgt_poll_group_000", 00:22:47.984 "listen_address": { 00:22:47.984 "trtype": "TCP", 00:22:47.984 "adrfam": "IPv4", 00:22:47.984 "traddr": "10.0.0.2", 00:22:47.984 "trsvcid": "4420" 00:22:47.984 }, 00:22:47.984 "peer_address": { 00:22:47.984 "trtype": "TCP", 00:22:47.984 "adrfam": "IPv4", 00:22:47.984 "traddr": "10.0.0.1", 00:22:47.984 "trsvcid": "51628" 00:22:47.984 }, 00:22:47.984 "auth": { 00:22:47.984 "state": "completed", 00:22:47.984 "digest": "sha512", 00:22:47.984 "dhgroup": "ffdhe6144" 00:22:47.984 } 00:22:47.984 } 00:22:47.984 ]' 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.984 07:49:39 
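After each attach, connect_authenticate verifies the result from both sides: the controller name on the host, then the negotiated auth parameters read back from the target's qpair listing. (The `\n\v\m\e\0` in the trace is only xtrace escaping the pattern side of `==`; the comparison is against the literal string nvme0.) A condensed sketch of the @44-@48 checks for this ffdhe6144 pass:

    # Assert the host sees the controller, then read the negotiated digest,
    # dhgroup, and auth state back from the target and compare with what was
    # configured for this iteration.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
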
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.984 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.243 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.243 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.243 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.502 07:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.436 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.694 07:49:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.694 07:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.261 00:22:50.261 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.261 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.261 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.518 { 00:22:50.518 "cntlid": 131, 00:22:50.518 "qid": 0, 00:22:50.518 "state": "enabled", 00:22:50.518 "thread": "nvmf_tgt_poll_group_000", 00:22:50.518 "listen_address": { 00:22:50.518 "trtype": "TCP", 00:22:50.518 "adrfam": "IPv4", 00:22:50.518 "traddr": "10.0.0.2", 00:22:50.518 "trsvcid": "4420" 00:22:50.518 }, 00:22:50.518 "peer_address": { 00:22:50.518 "trtype": "TCP", 00:22:50.518 "adrfam": "IPv4", 00:22:50.518 "traddr": "10.0.0.1", 00:22:50.518 "trsvcid": "51646" 00:22:50.518 }, 00:22:50.518 "auth": { 00:22:50.518 "state": "completed", 00:22:50.518 "digest": "sha512", 00:22:50.518 "dhgroup": "ffdhe6144" 00:22:50.518 } 00:22:50.518 } 00:22:50.518 ]' 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.518 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.776 07:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.711 07:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.275 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.839 00:22:52.839 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.839 07:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.839 07:49:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.839 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.839 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.839 07:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.839 07:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.839 07:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.839 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.839 { 00:22:52.839 "cntlid": 133, 00:22:52.839 "qid": 0, 00:22:52.839 "state": "enabled", 00:22:52.839 "thread": "nvmf_tgt_poll_group_000", 00:22:52.839 "listen_address": { 00:22:52.839 "trtype": "TCP", 00:22:52.839 "adrfam": "IPv4", 00:22:52.839 "traddr": "10.0.0.2", 00:22:52.839 "trsvcid": "4420" 00:22:52.839 }, 00:22:52.839 "peer_address": { 00:22:52.840 "trtype": "TCP", 00:22:52.840 "adrfam": "IPv4", 00:22:52.840 "traddr": "10.0.0.1", 00:22:52.840 "trsvcid": "45010" 00:22:52.840 }, 00:22:52.840 "auth": { 00:22:52.840 "state": "completed", 00:22:52.840 "digest": "sha512", 00:22:52.840 "dhgroup": "ffdhe6144" 00:22:52.840 } 00:22:52.840 } 00:22:52.840 ]' 00:22:52.840 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.097 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.354 07:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.292 07:49:45 
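Note the asymmetry between key indices: the key0-key2 passes register the host with both `--dhchap-key keyN` and `--dhchap-ctrlr-key ckeyN`, while the key3 passes (including the one that follows) carry no controller key. That is the `${ckeys[$3]:+...}` expansion at target/auth.sh@37 doing its job: ckeys[3] is empty, the `:+` alternate collapses to nothing, and the session authenticates unidirectionally (host to target only). A sketch, with $keyid standing in for the function's positional argument:

    # keyN/ckeyN name keys loaded into the target keyring earlier in the run.
    # An empty ckeys[keyid] drops the --dhchap-ctrlr-key flag entirely, so the
    # controller is not required to authenticate back to the host.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$HOST_UUID" \
        --dhchap-key "key$keyid" "${ckey[@]}"
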
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:54.292 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.550 07:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.115 00:22:55.115 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.115 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.115 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.374 { 00:22:55.374 "cntlid": 135, 00:22:55.374 "qid": 0, 00:22:55.374 "state": "enabled", 00:22:55.374 "thread": "nvmf_tgt_poll_group_000", 00:22:55.374 "listen_address": { 00:22:55.374 "trtype": "TCP", 00:22:55.374 "adrfam": "IPv4", 00:22:55.374 "traddr": "10.0.0.2", 00:22:55.374 "trsvcid": "4420" 00:22:55.374 }, 
00:22:55.374 "peer_address": { 00:22:55.374 "trtype": "TCP", 00:22:55.374 "adrfam": "IPv4", 00:22:55.374 "traddr": "10.0.0.1", 00:22:55.374 "trsvcid": "45040" 00:22:55.374 }, 00:22:55.374 "auth": { 00:22:55.374 "state": "completed", 00:22:55.374 "digest": "sha512", 00:22:55.374 "dhgroup": "ffdhe6144" 00:22:55.374 } 00:22:55.374 } 00:22:55.374 ]' 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.374 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.633 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.633 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.633 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.891 07:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.828 07:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.086 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.021 00:22:58.021 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.021 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.021 07:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:58.021 { 00:22:58.021 "cntlid": 137, 00:22:58.021 "qid": 0, 00:22:58.021 "state": "enabled", 00:22:58.021 "thread": "nvmf_tgt_poll_group_000", 00:22:58.021 "listen_address": { 00:22:58.021 "trtype": "TCP", 00:22:58.021 "adrfam": "IPv4", 00:22:58.021 "traddr": "10.0.0.2", 00:22:58.021 "trsvcid": "4420" 00:22:58.021 }, 00:22:58.021 "peer_address": { 00:22:58.021 "trtype": "TCP", 00:22:58.021 "adrfam": "IPv4", 00:22:58.021 "traddr": "10.0.0.1", 00:22:58.021 "trsvcid": "45062" 00:22:58.021 }, 00:22:58.021 "auth": { 00:22:58.021 "state": "completed", 00:22:58.021 "digest": "sha512", 00:22:58.021 "dhgroup": "ffdhe8192" 00:22:58.021 } 00:22:58.021 } 00:22:58.021 ]' 00:22:58.021 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.279 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.279 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.279 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.279 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.279 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.279 07:49:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.279 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.537 07:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.471 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.729 07:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.669 00:23:00.669 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.669 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.669 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.973 { 00:23:00.973 "cntlid": 139, 00:23:00.973 "qid": 0, 00:23:00.973 "state": "enabled", 00:23:00.973 "thread": "nvmf_tgt_poll_group_000", 00:23:00.973 "listen_address": { 00:23:00.973 "trtype": "TCP", 00:23:00.973 "adrfam": "IPv4", 00:23:00.973 "traddr": "10.0.0.2", 00:23:00.973 "trsvcid": "4420" 00:23:00.973 }, 00:23:00.973 "peer_address": { 00:23:00.973 "trtype": "TCP", 00:23:00.973 "adrfam": "IPv4", 00:23:00.973 "traddr": "10.0.0.1", 00:23:00.973 "trsvcid": "53014" 00:23:00.973 }, 00:23:00.973 "auth": { 00:23:00.973 "state": "completed", 00:23:00.973 "digest": "sha512", 00:23:00.973 "dhgroup": "ffdhe8192" 00:23:00.973 } 00:23:00.973 } 00:23:00.973 ]' 00:23:00.973 07:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.973 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.231 07:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTgyNzRiYmY4Y2QwNDk4YzZkNzEwOGYxM2Q0Zjc1NGZunwn2: --dhchap-ctrl-secret DHHC-1:02:NDIwZWRlZDIzYThkNDljNTdlMDgzZTBhYjQ5M2ZkZGM4MDQ5MzIyYzdjMzRiZDllJQdyEw==: 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:02.176 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.742 07:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.675 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.675 { 00:23:03.675 "cntlid": 141, 00:23:03.675 "qid": 0, 00:23:03.675 "state": "enabled", 00:23:03.675 "thread": "nvmf_tgt_poll_group_000", 00:23:03.675 "listen_address": { 00:23:03.675 "trtype": "TCP", 00:23:03.675 "adrfam": "IPv4", 00:23:03.675 "traddr": "10.0.0.2", 00:23:03.675 "trsvcid": "4420" 00:23:03.675 }, 00:23:03.675 "peer_address": { 00:23:03.675 "trtype": "TCP", 00:23:03.675 "adrfam": "IPv4", 00:23:03.675 "traddr": "10.0.0.1", 00:23:03.675 "trsvcid": "53042" 00:23:03.675 }, 00:23:03.675 "auth": { 00:23:03.675 "state": "completed", 00:23:03.675 "digest": "sha512", 00:23:03.675 "dhgroup": "ffdhe8192" 00:23:03.675 } 00:23:03.675 } 00:23:03.675 ]' 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.675 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:03.973 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.973 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:03.973 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.973 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.973 07:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.231 07:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ODNlNzFhNGQ2MjE1YTc0NmFiY2NmNGIxYTE3YmJjNzg1MTcxMjRjM2JlYTNkNWI2mCQ8oQ==: --dhchap-ctrl-secret DHHC-1:01:NmUwNWNlZDg1ZDBmMmQ4MDllZDljMmUwY2QwNWE5MGFzZ0cY: 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.162 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.420 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:23:05.420 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.420 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.420 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:05.420 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:05.421 07:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:06.358 00:23:06.358 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.358 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.358 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.617 { 00:23:06.617 "cntlid": 143, 00:23:06.617 "qid": 0, 00:23:06.617 "state": "enabled", 00:23:06.617 "thread": "nvmf_tgt_poll_group_000", 00:23:06.617 "listen_address": { 00:23:06.617 "trtype": "TCP", 00:23:06.617 "adrfam": "IPv4", 00:23:06.617 "traddr": "10.0.0.2", 00:23:06.617 "trsvcid": "4420" 00:23:06.617 }, 00:23:06.617 "peer_address": { 00:23:06.617 "trtype": "TCP", 00:23:06.617 "adrfam": "IPv4", 00:23:06.617 "traddr": "10.0.0.1", 00:23:06.617 "trsvcid": "53076" 00:23:06.617 }, 00:23:06.617 "auth": { 00:23:06.617 "state": "completed", 00:23:06.617 "digest": "sha512", 00:23:06.617 "dhgroup": "ffdhe8192" 00:23:06.617 } 00:23:06.617 } 00:23:06.617 ]' 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.617 
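The passes above all follow the same connect_authenticate round trip, once per key index: pin the host initiator to the digest/dhgroup pair under test, register the host NQN on the subsystem with that key (plus a controller key when bidirectional authentication is exercised), attach a controller through the host RPC socket, then confirm from the target that the new qpair really negotiated DH-HMAC-CHAP with those parameters. A minimal sketch of one pass, using the NQNs and addresses from this run; rpc.py stands in for the suite's rpc_cmd/hostrpc wrappers (target RPC on the default /var/tmp/spdk.sock, inside the cvl_0_0_ns_spdk namespace in this run, and the host service on /var/tmp/host.sock), and key0/ckey0 name key material the suite loaded during setup:

  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Pin the host initiator to the digest/dhgroup pair under test.
  rpc.py -s "$HOSTSOCK" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Allow the host on the subsystem with the key pair under test.
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller; this is where the in-band DH-HMAC-CHAP handshake runs.
  rpc.py -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify from the target side that the qpair finished authentication
  # with the expected parameters.
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]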
07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.617 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.876 07:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:07.813 07:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.072 07:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.010 00:23:09.010 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.010 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.010 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.268 { 00:23:09.268 "cntlid": 145, 00:23:09.268 "qid": 0, 00:23:09.268 "state": "enabled", 00:23:09.268 "thread": "nvmf_tgt_poll_group_000", 00:23:09.268 "listen_address": { 00:23:09.268 "trtype": "TCP", 00:23:09.268 "adrfam": "IPv4", 00:23:09.268 "traddr": "10.0.0.2", 00:23:09.268 "trsvcid": "4420" 00:23:09.268 }, 00:23:09.268 "peer_address": { 00:23:09.268 "trtype": "TCP", 00:23:09.268 "adrfam": "IPv4", 00:23:09.268 "traddr": "10.0.0.1", 00:23:09.268 "trsvcid": "53110" 00:23:09.268 }, 00:23:09.268 "auth": { 00:23:09.268 "state": "completed", 00:23:09.268 "digest": "sha512", 00:23:09.268 "dhgroup": "ffdhe8192" 00:23:09.268 } 00:23:09.268 } 00:23:09.268 ]' 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.268 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.526 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.527 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.527 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.527 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.527 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.785 07:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTc4ZDk3MzM3M2VlNjViNWNlZjMzODBjNzIyYzg3ZWYwZjViMTU3ODRlMzRlZmYwjJdeFw==: --dhchap-ctrl-secret DHHC-1:03:NjNlODg3YzdjMzRmMGNlMWM3MDE3OTNmZWVkOTYzZjZlNTQ5MjYwMjBlZDIzYjMzZmE2YTA4ZWEwOGFiYzg4OJtquLw=: 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:10.721 07:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:23:11.659 request: 00:23:11.659 { 00:23:11.659 "name": "nvme0", 00:23:11.659 "trtype": "tcp", 00:23:11.659 "traddr": "10.0.0.2", 00:23:11.659 "adrfam": "ipv4", 00:23:11.659 "trsvcid": "4420", 00:23:11.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:11.659 "prchk_reftag": false, 00:23:11.659 "prchk_guard": false, 00:23:11.659 "hdgst": false, 00:23:11.659 "ddgst": false, 00:23:11.659 "dhchap_key": "key2", 00:23:11.659 "method": "bdev_nvme_attach_controller", 00:23:11.659 "req_id": 1 00:23:11.659 } 00:23:11.659 Got JSON-RPC error response 00:23:11.659 response: 00:23:11.659 { 00:23:11.659 "code": -5, 00:23:11.659 "message": "Input/output error" 00:23:11.659 } 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:11.659 07:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:12.228 request: 00:23:12.228 { 00:23:12.228 "name": "nvme0", 00:23:12.228 "trtype": "tcp", 00:23:12.228 "traddr": "10.0.0.2", 00:23:12.228 "adrfam": "ipv4", 00:23:12.228 "trsvcid": "4420", 00:23:12.228 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:12.228 "prchk_reftag": false, 00:23:12.228 "prchk_guard": false, 00:23:12.228 "hdgst": false, 00:23:12.228 "ddgst": false, 00:23:12.228 "dhchap_key": "key1", 00:23:12.228 "dhchap_ctrlr_key": "ckey2", 00:23:12.228 "method": "bdev_nvme_attach_controller", 00:23:12.228 "req_id": 1 00:23:12.228 } 00:23:12.228 Got JSON-RPC error response 00:23:12.228 response: 00:23:12.228 { 00:23:12.228 "code": -5, 00:23:12.228 "message": "Input/output error" 00:23:12.228 } 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:12.485 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.486 07:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.418 request: 00:23:13.418 { 00:23:13.418 "name": "nvme0", 00:23:13.418 "trtype": "tcp", 00:23:13.418 "traddr": "10.0.0.2", 00:23:13.418 "adrfam": "ipv4", 00:23:13.418 "trsvcid": "4420", 00:23:13.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:13.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:13.418 "prchk_reftag": false, 00:23:13.418 "prchk_guard": false, 00:23:13.418 "hdgst": false, 00:23:13.418 "ddgst": false, 00:23:13.418 "dhchap_key": "key1", 00:23:13.418 "dhchap_ctrlr_key": "ckey1", 00:23:13.418 "method": "bdev_nvme_attach_controller", 00:23:13.418 "req_id": 1 00:23:13.418 } 00:23:13.418 Got JSON-RPC error response 00:23:13.418 response: 00:23:13.418 { 00:23:13.418 "code": -5, 00:23:13.418 "message": "Input/output error" 00:23:13.418 } 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1087705 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1087705 ']' 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1087705 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.418 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1087705 00:23:13.419 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.419 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:23:13.419 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1087705' 00:23:13.419 killing process with pid 1087705 00:23:13.419 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1087705 00:23:13.419 07:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1087705 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1110605 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1110605 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1110605 ']' 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.790 07:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1110605 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1110605 ']' 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
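At this point the first target process (pid 1087705) has been stopped and a fresh nvmf_tgt started with --wait-for-rpc and -L nvmf_auth (pid 1110605), so the authentication state machine logs verbosely for the remaining cases. After one more full-strength pass, those cases are deliberate mismatches: the host is registered with one key, or restricted to a digest/dhgroup the other side will not accept, and the attach is expected to fail. That is why the "request: { ... }" blocks ending in code -5, "Input/output error" above and below are the desired outcome rather than test failures. The suite inverts the exit status with its NOT helper; a sketch of the same idiom in plain shell, under the assumptions from the earlier sketch:

  # Register the host with key1 only, then try to authenticate with key2;
  # the attach must be rejected, so invert the status with `!`.
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1
  if ! rpc.py -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
      echo "attach rejected as expected (JSON-RPC code -5, Input/output error)"
  fi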
00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.748 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.005 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.005 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:16.005 07:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:16.005 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.005 07:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.263 07:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.196 00:23:17.196 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.196 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.196 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.454 { 00:23:17.454 
"cntlid": 1, 00:23:17.454 "qid": 0, 00:23:17.454 "state": "enabled", 00:23:17.454 "thread": "nvmf_tgt_poll_group_000", 00:23:17.454 "listen_address": { 00:23:17.454 "trtype": "TCP", 00:23:17.454 "adrfam": "IPv4", 00:23:17.454 "traddr": "10.0.0.2", 00:23:17.454 "trsvcid": "4420" 00:23:17.454 }, 00:23:17.454 "peer_address": { 00:23:17.454 "trtype": "TCP", 00:23:17.454 "adrfam": "IPv4", 00:23:17.454 "traddr": "10.0.0.1", 00:23:17.454 "trsvcid": "37356" 00:23:17.454 }, 00:23:17.454 "auth": { 00:23:17.454 "state": "completed", 00:23:17.454 "digest": "sha512", 00:23:17.454 "dhgroup": "ffdhe8192" 00:23:17.454 } 00:23:17.454 } 00:23:17.454 ]' 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.454 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.455 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.455 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:17.455 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.713 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.713 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.713 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.971 07:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjQ0YjYxMmQ5YTUyYTA4OTQwNjlhNTUzNjdlODRiYTM3YzQ2ZDJhYTVmMGQyNWIzZDRjMzFiZGEzYjA2NGYxMT9i11c=: 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:18.907 07:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.165 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.423 request: 00:23:19.423 { 00:23:19.423 "name": "nvme0", 00:23:19.423 "trtype": "tcp", 00:23:19.423 "traddr": "10.0.0.2", 00:23:19.423 "adrfam": "ipv4", 00:23:19.423 "trsvcid": "4420", 00:23:19.423 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:19.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:19.423 "prchk_reftag": false, 00:23:19.423 "prchk_guard": false, 00:23:19.423 "hdgst": false, 00:23:19.423 "ddgst": false, 00:23:19.423 "dhchap_key": "key3", 00:23:19.423 "method": "bdev_nvme_attach_controller", 00:23:19.423 "req_id": 1 00:23:19.424 } 00:23:19.424 Got JSON-RPC error response 00:23:19.424 response: 00:23:19.424 { 00:23:19.424 "code": -5, 00:23:19.424 "message": "Input/output error" 00:23:19.424 } 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:19.424 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.682 07:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.940 request: 00:23:19.940 { 00:23:19.940 "name": "nvme0", 00:23:19.940 "trtype": "tcp", 00:23:19.940 "traddr": "10.0.0.2", 00:23:19.940 "adrfam": "ipv4", 00:23:19.940 "trsvcid": "4420", 00:23:19.940 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:19.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:19.940 "prchk_reftag": false, 00:23:19.940 "prchk_guard": false, 00:23:19.940 "hdgst": false, 00:23:19.940 "ddgst": false, 00:23:19.940 "dhchap_key": "key3", 00:23:19.940 "method": "bdev_nvme_attach_controller", 00:23:19.940 "req_id": 1 00:23:19.940 } 00:23:19.940 Got JSON-RPC error response 00:23:19.940 response: 00:23:19.940 { 00:23:19.940 "code": -5, 00:23:19.940 "message": "Input/output error" 00:23:19.940 } 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:19.940 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:20.198 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:20.457 request: 00:23:20.457 { 00:23:20.457 "name": "nvme0", 00:23:20.457 "trtype": "tcp", 00:23:20.457 "traddr": "10.0.0.2", 00:23:20.457 "adrfam": "ipv4", 00:23:20.457 "trsvcid": "4420", 00:23:20.457 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:20.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:20.457 "prchk_reftag": false, 00:23:20.457 "prchk_guard": false, 00:23:20.457 "hdgst": false, 00:23:20.457 "ddgst": false, 00:23:20.457 
"dhchap_key": "key0", 00:23:20.457 "dhchap_ctrlr_key": "key1", 00:23:20.457 "method": "bdev_nvme_attach_controller", 00:23:20.457 "req_id": 1 00:23:20.457 } 00:23:20.457 Got JSON-RPC error response 00:23:20.457 response: 00:23:20.457 { 00:23:20.457 "code": -5, 00:23:20.457 "message": "Input/output error" 00:23:20.457 } 00:23:20.457 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:20.457 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.457 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.457 07:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.457 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:20.457 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:20.716 00:23:20.716 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:20.716 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:20.716 07:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.974 07:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.974 07:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.974 07:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1087855 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1087855 ']' 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1087855 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1087855 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.232 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.490 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1087855' 00:23:21.490 killing process with pid 1087855 00:23:21.490 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1087855 00:23:21.490 07:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1087855 
00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.019 rmmod nvme_tcp 00:23:24.019 rmmod nvme_fabrics 00:23:24.019 rmmod nvme_keyring 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1110605 ']' 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1110605 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1110605 ']' 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1110605 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110605 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110605' 00:23:24.019 killing process with pid 1110605 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1110605 00:23:24.019 07:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1110605 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.954 07:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.493 07:50:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.493 07:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.T0E /tmp/spdk.key-sha256.e7H /tmp/spdk.key-sha384.iT1 /tmp/spdk.key-sha512.ITw /tmp/spdk.key-sha512.aC6 /tmp/spdk.key-sha384.5lG /tmp/spdk.key-sha256.HQW '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:27.493 00:23:27.493 real 3m15.834s 00:23:27.493 user 7m32.559s 00:23:27.493 sys 0m24.720s 00:23:27.493 07:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:27.493 07:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.493 ************************************ 00:23:27.493 END TEST nvmf_auth_target 00:23:27.493 ************************************ 00:23:27.493 07:50:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:27.493 07:50:18 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:27.493 07:50:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:27.493 07:50:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:27.493 07:50:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.493 07:50:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.493 ************************************ 00:23:27.493 START TEST nvmf_bdevio_no_huge 00:23:27.494 ************************************ 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:27.494 * Looking for test storage... 00:23:27.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
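[Editor's reference sketch] The build_nvmf_app_args lines above assemble the target's argv; in this no-huge run the result is the nvmf_tgt invocation logged further down. A condensed sketch of that composition (the placement of the binary path in NVMF_APP is an assumption, since common.sh sets it elsewhere; the shm id here is 0 and the no-huge sizing is 1024 MB, as this job uses):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NO_HUGE=(--no-huge -s 1024)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # run without hugepages

    # Executed inside the target's network namespace with the test's core mask:
    ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x78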
00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.494 07:50:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:29.400 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:29.400 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:29.400 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:29.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:23:29.400 00:23:29.400 --- 10.0.0.2 ping statistics --- 00:23:29.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.400 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:29.400 00:23:29.400 --- 10.0.0.1 ping statistics --- 00:23:29.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.400 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1113770 00:23:29.400 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1113770 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1113770 ']' 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.401 07:50:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.401 [2024-07-15 07:50:20.475449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:29.401 [2024-07-15 07:50:20.475618] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:29.660 [2024-07-15 07:50:20.637709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.919 [2024-07-15 07:50:20.920160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:29.919 [2024-07-15 07:50:20.920226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.919 [2024-07-15 07:50:20.920254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.919 [2024-07-15 07:50:20.920276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.919 [2024-07-15 07:50:20.920298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.919 [2024-07-15 07:50:20.920514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:29.919 [2024-07-15 07:50:20.920619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:29.919 [2024-07-15 07:50:20.920716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.919 [2024-07-15 07:50:20.920733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.177 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.177 [2024-07-15 07:50:21.402537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.435 Malloc0 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.435 07:50:21 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:30.435 [2024-07-15 07:50:21.492046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.435 { 00:23:30.435 "params": { 00:23:30.435 "name": "Nvme$subsystem", 00:23:30.435 "trtype": "$TEST_TRANSPORT", 00:23:30.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.435 "adrfam": "ipv4", 00:23:30.435 "trsvcid": "$NVMF_PORT", 00:23:30.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.435 "hdgst": ${hdgst:-false}, 00:23:30.435 "ddgst": ${ddgst:-false} 00:23:30.435 }, 00:23:30.435 "method": "bdev_nvme_attach_controller" 00:23:30.435 } 00:23:30.435 EOF 00:23:30.435 )") 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:30.435 07:50:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.435 "params": { 00:23:30.435 "name": "Nvme1", 00:23:30.435 "trtype": "tcp", 00:23:30.435 "traddr": "10.0.0.2", 00:23:30.435 "adrfam": "ipv4", 00:23:30.435 "trsvcid": "4420", 00:23:30.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.435 "hdgst": false, 00:23:30.435 "ddgst": false 00:23:30.436 }, 00:23:30.436 "method": "bdev_nvme_attach_controller" 00:23:30.436 }' 00:23:30.436 [2024-07-15 07:50:21.573140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:30.436 [2024-07-15 07:50:21.573307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1113926 ] 00:23:30.694 [2024-07-15 07:50:21.720960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.953 [2024-07-15 07:50:21.977936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.953 [2024-07-15 07:50:21.977962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.953 [2024-07-15 07:50:21.977967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.553 I/O targets: 00:23:31.553 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:31.553 00:23:31.553 00:23:31.553 CUnit - A unit testing framework for C - Version 2.1-3 00:23:31.553 http://cunit.sourceforge.net/ 00:23:31.553 00:23:31.553 00:23:31.553 Suite: bdevio tests on: Nvme1n1 00:23:31.553 Test: blockdev write read block ...passed 00:23:31.553 Test: blockdev write zeroes read block ...passed 00:23:31.553 Test: blockdev write zeroes read no split ...passed 00:23:31.553 Test: blockdev write zeroes read split ...passed 00:23:31.553 Test: blockdev write zeroes read split partial ...passed 00:23:31.553 Test: blockdev reset ...[2024-07-15 07:50:22.777060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:31.553 [2024-07-15 07:50:22.777251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:23:31.811 [2024-07-15 07:50:22.796630] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:31.811 passed 00:23:31.811 Test: blockdev write read 8 blocks ...passed 00:23:31.811 Test: blockdev write read size > 128k ...passed 00:23:31.811 Test: blockdev write read invalid size ...passed 00:23:31.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:31.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:31.812 Test: blockdev write read max offset ...passed 00:23:31.812 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:31.812 Test: blockdev writev readv 8 blocks ...passed 00:23:32.071 Test: blockdev writev readv 30 x 1block ...passed 00:23:32.071 Test: blockdev writev readv block ...passed 00:23:32.071 Test: blockdev writev readv size > 128k ...passed 00:23:32.071 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:32.071 Test: blockdev comparev and writev ...[2024-07-15 07:50:23.096622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.096695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.096754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.096805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.097395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.097434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.097490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.097531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.098102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.098146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.098213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.098255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.098824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.098861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.098936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:32.071 [2024-07-15 07:50:23.098978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:32.071 passed 00:23:32.071 Test: blockdev nvme passthru rw ...passed 00:23:32.071 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:50:23.181400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:32.071 [2024-07-15 07:50:23.181461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.181778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:32.071 [2024-07-15 07:50:23.181814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.182128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:32.071 [2024-07-15 07:50:23.182171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:32.071 [2024-07-15 07:50:23.182494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:32.071 [2024-07-15 07:50:23.182529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:32.071 passed 00:23:32.071 Test: blockdev nvme admin passthru ...passed 00:23:32.071 Test: blockdev copy ...passed 00:23:32.071 00:23:32.071 Run Summary: Type Total Ran Passed Failed Inactive 00:23:32.071 suites 1 1 n/a 0 0 00:23:32.071 tests 23 23 23 0 0 00:23:32.071 asserts 152 152 152 0 n/a 00:23:32.071 00:23:32.071 Elapsed time = 1.416 seconds 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.054 07:50:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.054 rmmod nvme_tcp 00:23:33.054 rmmod nvme_fabrics 00:23:33.054 rmmod nvme_keyring 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1113770 ']' 00:23:33.054 07:50:24 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1113770 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1113770 ']' 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1113770 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113770 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1113770' 00:23:33.054 killing process with pid 1113770 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1113770 00:23:33.054 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1113770 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.989 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.990 07:50:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.895 07:50:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.895 00:23:35.895 real 0m8.759s 00:23:35.895 user 0m20.545s 00:23:35.895 sys 0m2.746s 00:23:35.895 07:50:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.895 07:50:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.895 ************************************ 00:23:35.895 END TEST nvmf_bdevio_no_huge 00:23:35.895 ************************************ 00:23:35.895 07:50:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:35.895 07:50:27 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:35.895 07:50:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:35.895 07:50:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.895 07:50:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.895 ************************************ 00:23:35.895 START TEST nvmf_tls 00:23:35.895 ************************************ 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:35.895 * Looking for test storage... 
00:23:35.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.895 07:50:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.421 
07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:38.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:38.421 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:38.421 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.421 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:38.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:23:38.422 00:23:38.422 --- 10.0.0.2 ping statistics --- 00:23:38.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.422 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:38.422 00:23:38.422 --- 10.0.0.1 ping statistics --- 00:23:38.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.422 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1116259 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1116259 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1116259 ']' 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.422 07:50:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.422 [2024-07-15 07:50:29.360278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:38.422 [2024-07-15 07:50:29.360429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.422 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.422 [2024-07-15 07:50:29.495818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.679 [2024-07-15 07:50:29.753688] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.679 [2024-07-15 07:50:29.753767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:38.679 [2024-07-15 07:50:29.753794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.679 [2024-07-15 07:50:29.753819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.679 [2024-07-15 07:50:29.753840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.679 [2024-07-15 07:50:29.753901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:39.245 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:39.502 true 00:23:39.502 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:39.502 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:39.758 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:39.758 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:39.758 07:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:40.015 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.015 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:40.272 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:40.272 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:40.272 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:40.529 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.529 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:40.786 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:40.786 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:40.786 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.786 07:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:41.043 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:41.043 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:41.043 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:41.300 07:50:32 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.300 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:41.558 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:41.558 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:41.558 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:41.816 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.816 07:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:42.073 07:50:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.aOWJHufoPx 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.2I5lmHAFT4 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aOWJHufoPx 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2I5lmHAFT4 00:23:42.331 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:42.588 07:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:43.153 07:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aOWJHufoPx 00:23:43.153 07:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aOWJHufoPx 00:23:43.153 07:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.411 [2024-07-15 07:50:34.401661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.411 07:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:43.669 07:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:43.927 [2024-07-15 07:50:34.943191] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.927 [2024-07-15 07:50:34.943552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.927 07:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.185 malloc0 00:23:44.185 07:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.443 07:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aOWJHufoPx 00:23:44.701 [2024-07-15 07:50:35.780102] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:44.701 07:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aOWJHufoPx 00:23:44.701 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.895 Initializing NVMe Controllers 00:23:56.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.895 Initialization complete. Launching workers. 
00:23:56.895 ======================================================== 00:23:56.895 Latency(us) 00:23:56.895 Device Information : IOPS MiB/s Average min max 00:23:56.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5626.63 21.98 11379.65 2282.30 17640.42 00:23:56.895 ======================================================== 00:23:56.895 Total : 5626.63 21.98 11379.65 2282.30 17640.42 00:23:56.895 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aOWJHufoPx 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aOWJHufoPx' 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1118280 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1118280 /var/tmp/bdevperf.sock 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1118280 ']' 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.895 07:50:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.895 [2024-07-15 07:50:46.091826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
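
Taken together, the target-side TLS setup above reduces to a short sequence of rpc.py calls. The sketch below condenses them, reusing the key file, NQNs and address this particular run generated (the /tmp/tmp.* paths come from mktemp and differ on every run):

    # Condensed from the setup_nvmf_tgt steps above; RPC is scripts/rpc.py from
    # the SPDK tree, talking to the nvmf_tgt started with --wait-for-rpc.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.aOWJHufoPx                              # PSK interchange file, chmod 0600

    $RPC sock_impl_set_options -i ssl --tls-version 13   # pin the ssl impl to TLS 1.3
    $RPC framework_start_init                            # leave --wait-for-rpc mode
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                    # -k: this listener requires TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk "$KEY"                                     # bind this PSK to host1

With that in place, the bdevperf run that follows is the positive case: same subsystem, same host NQN, same key file, so the TLS handshake completes and TLSTESTn1 carries I/O for the full 10 seconds.
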
00:23:56.895 [2024-07-15 07:50:46.092002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118280 ] 00:23:56.895 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.895 [2024-07-15 07:50:46.215820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.895 [2024-07-15 07:50:46.436306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.895 07:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.895 07:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:56.895 07:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aOWJHufoPx 00:23:56.895 [2024-07-15 07:50:47.280102] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.895 [2024-07-15 07:50:47.280316] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:56.895 TLSTESTn1 00:23:56.895 07:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:56.895 Running I/O for 10 seconds... 00:24:06.855 00:24:06.855 Latency(us) 00:24:06.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.855 Verification LBA range: start 0x0 length 0x2000 00:24:06.855 TLSTESTn1 : 10.03 2468.45 9.64 0.00 0.00 51749.56 8349.77 61749.48 00:24:06.855 =================================================================================================================== 00:24:06.855 Total : 2468.45 9.64 0.00 0.00 51749.56 8349.77 61749.48 00:24:06.855 0 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1118280 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1118280 ']' 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1118280 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118280 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118280' 00:24:06.855 killing process with pid 1118280 00:24:06.855 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1118280 00:24:06.855 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.856 00:24:06.856 Latency(us) 00:24:06.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:06.856 =================================================================================================================== 00:24:06.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.856 [2024-07-15 07:50:57.573689] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:06.856 07:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1118280 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2I5lmHAFT4 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2I5lmHAFT4 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2I5lmHAFT4 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2I5lmHAFT4' 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1119728 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1119728 /var/tmp/bdevperf.sock 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1119728 ']' 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.422 07:50:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.422 [2024-07-15 07:50:58.589925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
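
The run below is the first negative case: bdevperf presents /tmp/tmp.2I5lmHAFT4, the second key generated earlier, which was never bound to any host on the target; the harness's NOT wrapper asserts that run_bdevperf exits non-zero. The step that must fail is this attach against bdevperf's own RPC socket (with $RPC as in the earlier sketch):

    # Expected to fail: this key was never registered via nvmf_subsystem_add_host,
    # so the handshake cannot complete and the RPC returns -5 (Input/output error).
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.2I5lmHAFT4
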
00:24:07.422 [2024-07-15 07:50:58.590066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119728 ] 00:24:07.680 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.680 [2024-07-15 07:50:58.718506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.939 [2024-07-15 07:50:58.942658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.505 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.505 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:08.505 07:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2I5lmHAFT4 00:24:08.762 [2024-07-15 07:50:59.802071] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.762 [2024-07-15 07:50:59.802268] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:08.762 [2024-07-15 07:50:59.812568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:08.762 [2024-07-15 07:50:59.813268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:08.762 [2024-07-15 07:50:59.814227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:08.762 [2024-07-15 07:50:59.815234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.762 [2024-07-15 07:50:59.815268] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:08.762 [2024-07-15 07:50:59.815309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:08.762 request: 00:24:08.762 { 00:24:08.762 "name": "TLSTEST", 00:24:08.762 "trtype": "tcp", 00:24:08.762 "traddr": "10.0.0.2", 00:24:08.762 "adrfam": "ipv4", 00:24:08.762 "trsvcid": "4420", 00:24:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.762 "prchk_reftag": false, 00:24:08.762 "prchk_guard": false, 00:24:08.762 "hdgst": false, 00:24:08.762 "ddgst": false, 00:24:08.762 "psk": "/tmp/tmp.2I5lmHAFT4", 00:24:08.762 "method": "bdev_nvme_attach_controller", 00:24:08.762 "req_id": 1 00:24:08.762 } 00:24:08.762 Got JSON-RPC error response 00:24:08.762 response: 00:24:08.762 { 00:24:08.762 "code": -5, 00:24:08.762 "message": "Input/output error" 00:24:08.762 } 00:24:08.762 07:50:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1119728 00:24:08.762 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1119728 ']' 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1119728 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119728 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119728' 00:24:08.763 killing process with pid 1119728 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1119728 00:24:08.763 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.763 00:24:08.763 Latency(us) 00:24:08.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.763 =================================================================================================================== 00:24:08.763 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.763 07:50:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1119728 00:24:08.763 [2024-07-15 07:50:59.867043] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aOWJHufoPx 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aOWJHufoPx 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aOWJHufoPx 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aOWJHufoPx' 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1120056 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1120056 /var/tmp/bdevperf.sock 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1120056 ']' 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.698 07:51:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 [2024-07-15 07:51:00.886699] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
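
The second negative case below flips the mismatch: the key is the valid one, but the initiator identifies itself as nqn.2016-06.io.spdk:host2, which was never added to the subsystem. As the errors further down show, the target resolves keys by the TLS PSK identity string, which takes the form "NVMe0R01 <hostnqn> <subnqn>", so an unregistered host NQN finds no PSK no matter which key the initiator holds. The hypothetical change that would let this attach succeed (shown only to make the failure mode concrete; the test deliberately omits it) is registering host2 as well:

    # Hypothetical: binding the key to host2 would make its PSK identity resolvable.
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aOWJHufoPx
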
00:24:09.698 [2024-07-15 07:51:00.886853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120056 ] 00:24:09.956 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.956 [2024-07-15 07:51:01.014651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.215 [2024-07-15 07:51:01.238327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.781 07:51:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.781 07:51:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:10.781 07:51:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aOWJHufoPx 00:24:11.041 [2024-07-15 07:51:02.076455] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.041 [2024-07-15 07:51:02.076660] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:11.041 [2024-07-15 07:51:02.086425] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:11.041 [2024-07-15 07:51:02.086463] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:11.041 [2024-07-15 07:51:02.086601] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:11.041 [2024-07-15 07:51:02.086625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:11.041 [2024-07-15 07:51:02.087562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:11.041 [2024-07-15 07:51:02.088561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:11.041 [2024-07-15 07:51:02.088592] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:11.041 [2024-07-15 07:51:02.088632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:11.041 request: 00:24:11.041 { 00:24:11.041 "name": "TLSTEST", 00:24:11.041 "trtype": "tcp", 00:24:11.041 "traddr": "10.0.0.2", 00:24:11.041 "adrfam": "ipv4", 00:24:11.041 "trsvcid": "4420", 00:24:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:11.041 "prchk_reftag": false, 00:24:11.041 "prchk_guard": false, 00:24:11.041 "hdgst": false, 00:24:11.041 "ddgst": false, 00:24:11.041 "psk": "/tmp/tmp.aOWJHufoPx", 00:24:11.041 "method": "bdev_nvme_attach_controller", 00:24:11.041 "req_id": 1 00:24:11.041 } 00:24:11.041 Got JSON-RPC error response 00:24:11.041 response: 00:24:11.041 { 00:24:11.041 "code": -5, 00:24:11.041 "message": "Input/output error" 00:24:11.041 } 00:24:11.041 07:51:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1120056 00:24:11.041 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1120056 ']' 00:24:11.041 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1120056 00:24:11.041 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:11.041 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.041 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120056 00:24:11.042 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:11.042 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:11.042 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120056' 00:24:11.042 killing process with pid 1120056 00:24:11.042 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1120056 00:24:11.042 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.042 00:24:11.042 Latency(us) 00:24:11.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.042 =================================================================================================================== 00:24:11.042 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:11.042 [2024-07-15 07:51:02.138272] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.042 07:51:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1120056 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aOWJHufoPx 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aOWJHufoPx 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aOWJHufoPx 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:11.997 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aOWJHufoPx' 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1120393 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1120393 /var/tmp/bdevperf.sock 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1120393 ']' 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.998 07:51:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.998 [2024-07-15 07:51:03.142018] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
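
The third negative case is symmetric: known host, known key, but the subsystem NQN nqn.2016-06.io.spdk:cnode2 was never created, so the PSK identity "NVMe0R01 ...host1 ...cnode2" again matches nothing. An illustrative query (not part of the test script) that shows which subsystem/host pairs the target actually knows about at this point:

    # Lists each subsystem with its allowed hosts; only cnode1 paired with host1
    # should appear here, which is why the cnode2 lookup below fails.
    $RPC nvmf_get_subsystems | jq '.[] | {nqn, hosts}'
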
00:24:11.998 [2024-07-15 07:51:03.142157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120393 ] 00:24:11.998 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.256 [2024-07-15 07:51:03.270784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.514 [2024-07-15 07:51:03.494769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.079 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.079 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:13.079 07:51:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aOWJHufoPx 00:24:13.079 [2024-07-15 07:51:04.306503] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.079 [2024-07-15 07:51:04.306748] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:13.337 [2024-07-15 07:51:04.320180] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:13.337 [2024-07-15 07:51:04.320242] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:13.337 [2024-07-15 07:51:04.320309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:13.337 [2024-07-15 07:51:04.320651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:13.337 [2024-07-15 07:51:04.321444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:13.337 [2024-07-15 07:51:04.322437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:13.337 [2024-07-15 07:51:04.322490] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:13.337 [2024-07-15 07:51:04.322522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:13.337 request: 00:24:13.337 { 00:24:13.337 "name": "TLSTEST", 00:24:13.337 "trtype": "tcp", 00:24:13.337 "traddr": "10.0.0.2", 00:24:13.337 "adrfam": "ipv4", 00:24:13.337 "trsvcid": "4420", 00:24:13.337 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:13.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.337 "prchk_reftag": false, 00:24:13.337 "prchk_guard": false, 00:24:13.337 "hdgst": false, 00:24:13.337 "ddgst": false, 00:24:13.337 "psk": "/tmp/tmp.aOWJHufoPx", 00:24:13.337 "method": "bdev_nvme_attach_controller", 00:24:13.337 "req_id": 1 00:24:13.337 } 00:24:13.337 Got JSON-RPC error response 00:24:13.337 response: 00:24:13.337 { 00:24:13.337 "code": -5, 00:24:13.337 "message": "Input/output error" 00:24:13.337 } 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1120393 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1120393 ']' 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1120393 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120393 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120393' 00:24:13.337 killing process with pid 1120393 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1120393 00:24:13.337 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.337 00:24:13.337 Latency(us) 00:24:13.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.337 =================================================================================================================== 00:24:13.337 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:13.337 [2024-07-15 07:51:04.371419] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:13.337 07:51:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1120393 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1120655 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1120655 /var/tmp/bdevperf.sock 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1120655 ']' 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.274 07:51:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.274 [2024-07-15 07:51:05.403410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
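
The last negative case below drops the key entirely: the same attach with no --psk argument at all, aimed at a listener that was created with -k and therefore only accepts TLS connections. The socket is torn down during setup, which the initiator surfaces as errno 107 (Transport endpoint is not connected) followed by the usual -5 JSON-RPC error:

    # No key at all against the TLS-only listener; note the absent --psk flag.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
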
00:24:14.274 [2024-07-15 07:51:05.403563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120655 ] 00:24:14.274 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.532 [2024-07-15 07:51:05.533579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.532 [2024-07-15 07:51:05.759472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:15.468 [2024-07-15 07:51:06.656260] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:15.468 [2024-07-15 07:51:06.658246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:24:15.468 [2024-07-15 07:51:06.659228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.468 [2024-07-15 07:51:06.659273] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:15.468 [2024-07-15 07:51:06.659315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:15.468 request: 00:24:15.468 { 00:24:15.468 "name": "TLSTEST", 00:24:15.468 "trtype": "tcp", 00:24:15.468 "traddr": "10.0.0.2", 00:24:15.468 "adrfam": "ipv4", 00:24:15.468 "trsvcid": "4420", 00:24:15.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.468 "prchk_reftag": false, 00:24:15.468 "prchk_guard": false, 00:24:15.468 "hdgst": false, 00:24:15.468 "ddgst": false, 00:24:15.468 "method": "bdev_nvme_attach_controller", 00:24:15.468 "req_id": 1 00:24:15.468 } 00:24:15.468 Got JSON-RPC error response 00:24:15.468 response: 00:24:15.468 { 00:24:15.468 "code": -5, 00:24:15.468 "message": "Input/output error" 00:24:15.468 } 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1120655 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1120655 ']' 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1120655 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.468 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120655 00:24:15.726 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:15.726 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:15.726 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120655' 00:24:15.726 killing process with pid 1120655 00:24:15.726 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1120655 00:24:15.726 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.726 00:24:15.726 Latency(us) 00:24:15.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.726 =================================================================================================================== 00:24:15.726 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.726 07:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1120655 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1116259 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1116259 ']' 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1116259 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1116259 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1116259' 00:24:16.660 
killing process with pid 1116259 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1116259 00:24:16.660 [2024-07-15 07:51:07.692644] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:16.660 07:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1116259 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lx5ZwEtvWP 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lx5ZwEtvWP 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1121582 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1121582 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1121582 ']' 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.041 07:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.299 [2024-07-15 07:51:09.315792] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
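The key_long value generated above comes from format_interchange_psk, which wraps the raw hex key in the TLS PSK interchange format: the fixed prefix NVMeTLSkey-1, a hash identifier (02 selects SHA-384 here; 01 would be SHA-256), and a colon-terminated base64 payload. Judging by the payload in the trace (it decodes to the configured key string plus four extra bytes), the payload is the key bytes with a CRC32 appended. A minimal sketch of that transformation follows; the CRC byte order is an assumption, so verify against the helper in nvmf/common.sh for your SPDK revision:

# Sketch: rebuild the interchange-format PSK seen above from the raw key.
# Assumes payload = base64(key || CRC32(key)) with a little-endian CRC32.
key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key bytes exactly as configured
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
PY

Run against the key above, this should print the same NVMeTLSkey-1:02:...wWXNJw==: string the trace stores in /tmp/tmp.lx5ZwEtvWP.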
00:24:18.299 [2024-07-15 07:51:09.315963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.299 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.299 [2024-07-15 07:51:09.451759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.558 [2024-07-15 07:51:09.706065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.558 [2024-07-15 07:51:09.706150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.558 [2024-07-15 07:51:09.706180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.558 [2024-07-15 07:51:09.706205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.558 [2024-07-15 07:51:09.706226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.558 [2024-07-15 07:51:09.706283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lx5ZwEtvWP 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lx5ZwEtvWP 00:24:19.128 07:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:19.386 [2024-07-15 07:51:10.529811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.386 07:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:19.644 07:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:19.902 [2024-07-15 07:51:11.123478] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.902 [2024-07-15 07:51:11.123821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.160 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:20.417 malloc0 00:24:20.417 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:20.675 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.lx5ZwEtvWP 00:24:20.933 [2024-07-15 07:51:11.959279] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lx5ZwEtvWP 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lx5ZwEtvWP' 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1121887 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1121887 /var/tmp/bdevperf.sock 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1121887 ']' 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.933 07:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.933 [2024-07-15 07:51:12.062063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:20.933 [2024-07-15 07:51:12.062213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121887 ] 00:24:20.933 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.192 [2024-07-15 07:51:12.187889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.192 [2024-07-15 07:51:12.407474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.759 07:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.759 07:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:21.759 07:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lx5ZwEtvWP 00:24:22.325 [2024-07-15 07:51:13.257913] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.325 [2024-07-15 07:51:13.258138] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:22.325 TLSTESTn1 00:24:22.325 07:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:22.325 Running I/O for 10 seconds... 00:24:32.330 00:24:32.330 Latency(us) 00:24:32.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.330 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:32.330 Verification LBA range: start 0x0 length 0x2000 00:24:32.330 TLSTESTn1 : 10.03 2626.09 10.26 0.00 0.00 48635.67 7767.23 47768.46 00:24:32.330 =================================================================================================================== 00:24:32.330 Total : 2626.09 10.26 0.00 0.00 48635.67 7767.23 47768.46 00:24:32.330 0 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1121887 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1121887 ']' 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1121887 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.330 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1121887 00:24:32.588 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:32.588 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:32.588 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1121887' 00:24:32.588 killing process with pid 1121887 00:24:32.588 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1121887 00:24:32.588 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.588 00:24:32.588 Latency(us) 00:24:32.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:32.588 =================================================================================================================== 00:24:32.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.588 [2024-07-15 07:51:23.581251] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:32.588 07:51:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1121887 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lx5ZwEtvWP 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lx5ZwEtvWP 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lx5ZwEtvWP 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lx5ZwEtvWP 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lx5ZwEtvWP' 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1123337 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1123337 /var/tmp/bdevperf.sock 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123337 ']' 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.524 07:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.524 [2024-07-15 07:51:24.607677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
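The chmod 0666 above deliberately loosens the key file before the next run_bdevperf: SPDK refuses to load a PSK whose file is readable beyond its owner, which is exactly the "Incorrect permissions for PSK file" failure this negative test expects below. The same guard expressed as a shell sketch; require_psk_mode is a hypothetical helper name, not part of the SPDK tree:

# Sketch: accept a PSK file only when it is owner-only (0600), mirroring
# the permission check that fails in the trace that follows.
require_psk_mode() {
    local key_file=$1 mode
    mode=$(stat -c '%a' "$key_file")
    [[ $mode == 600 ]] || { echo "bad mode $mode on $key_file (want 0600)" >&2; return 1; }
}
require_psk_mode /tmp/tmp.lx5ZwEtvWP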
00:24:33.524 [2024-07-15 07:51:24.607811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123337 ] 00:24:33.524 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.524 [2024-07-15 07:51:24.743034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.782 [2024-07-15 07:51:24.965129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.351 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.351 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:34.351 07:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lx5ZwEtvWP 00:24:34.609 [2024-07-15 07:51:25.800961] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.609 [2024-07-15 07:51:25.801049] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:34.609 [2024-07-15 07:51:25.801072] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lx5ZwEtvWP 00:24:34.609 request: 00:24:34.609 { 00:24:34.609 "name": "TLSTEST", 00:24:34.609 "trtype": "tcp", 00:24:34.609 "traddr": "10.0.0.2", 00:24:34.609 "adrfam": "ipv4", 00:24:34.609 "trsvcid": "4420", 00:24:34.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.609 "prchk_reftag": false, 00:24:34.609 "prchk_guard": false, 00:24:34.609 "hdgst": false, 00:24:34.609 "ddgst": false, 00:24:34.609 "psk": "/tmp/tmp.lx5ZwEtvWP", 00:24:34.609 "method": "bdev_nvme_attach_controller", 00:24:34.609 "req_id": 1 00:24:34.609 } 00:24:34.609 Got JSON-RPC error response 00:24:34.609 response: 00:24:34.609 { 00:24:34.609 "code": -1, 00:24:34.609 "message": "Operation not permitted" 00:24:34.609 } 00:24:34.610 07:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1123337 00:24:34.610 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123337 ']' 00:24:34.610 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123337 00:24:34.610 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:34.610 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.610 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123337 00:24:34.869 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:34.869 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:34.869 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123337' 00:24:34.869 killing process with pid 1123337 00:24:34.869 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123337 00:24:34.869 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.869 00:24:34.869 Latency(us) 00:24:34.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.869 
=================================================================================================================== 00:24:34.869 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:34.869 07:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123337 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1121582 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1121582 ']' 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1121582 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1121582 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1121582' 00:24:35.808 killing process with pid 1121582 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1121582 00:24:35.808 [2024-07-15 07:51:26.836021] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:35.808 07:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1121582 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1123866 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1123866 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123866 ']' 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
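While the next target instance (pid 1123866) starts up, the initiator side of run_bdevperf is worth condensing; every command below appears verbatim in the first successful run of the trace (bdevperf pid 1121887):

# Sketch: run_bdevperf's client side — start bdevperf on its own RPC socket,
# attach to the TLS listener with the PSK file, then drive the 10 s verify job.
# (The harness waits for the socket to appear before issuing RPCs.)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

$SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lx5ZwEtvWP
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests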
00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.189 07:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.189 [2024-07-15 07:51:28.384906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:37.189 [2024-07-15 07:51:28.385048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.446 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.446 [2024-07-15 07:51:28.515802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.702 [2024-07-15 07:51:28.763347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.702 [2024-07-15 07:51:28.763448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.702 [2024-07-15 07:51:28.763479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.702 [2024-07-15 07:51:28.763503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.702 [2024-07-15 07:51:28.763524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.702 [2024-07-15 07:51:28.763571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lx5ZwEtvWP 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lx5ZwEtvWP 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.lx5ZwEtvWP 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lx5ZwEtvWP 00:24:38.265 07:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.521 [2024-07-15 07:51:29.610331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.521 07:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:38.777 
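The NOT/valid_exec_arg sequence above wraps setup_nvmf_tgt so that the expected add_host failure below counts as a pass. A simplified sketch of that pattern; the real helper in autotest_common.sh additionally screens signal exits via the es > 128 check visible in the trace, which is omitted here:

# Sketch: pass only when the wrapped command fails, as the es=1 /
# (( !es == 0 )) bookkeeping in the trace does for expected TLS failures.
NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))   # arithmetic true (exit 0) only when the command failed
}
NOT false && echo "expected failure observed"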
07:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:39.034 [2024-07-15 07:51:30.180003] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.035 [2024-07-15 07:51:30.180335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.035 07:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:39.292 malloc0 00:24:39.292 07:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:39.854 07:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lx5ZwEtvWP 00:24:39.854 [2024-07-15 07:51:31.083939] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:39.854 [2024-07-15 07:51:31.084004] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:39.854 [2024-07-15 07:51:31.084049] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:40.112 request: 00:24:40.112 { 00:24:40.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.112 "host": "nqn.2016-06.io.spdk:host1", 00:24:40.112 "psk": "/tmp/tmp.lx5ZwEtvWP", 00:24:40.112 "method": "nvmf_subsystem_add_host", 00:24:40.112 "req_id": 1 00:24:40.112 } 00:24:40.112 Got JSON-RPC error response 00:24:40.112 response: 00:24:40.112 { 00:24:40.112 "code": -32603, 00:24:40.112 "message": "Internal error" 00:24:40.112 } 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1123866 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123866 ']' 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123866 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123866 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123866' 00:24:40.112 killing process with pid 1123866 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123866 00:24:40.112 07:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123866 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lx5ZwEtvWP 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:41.485 
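With the key restored to 0600, sections 184/185 repeat the full target bring-up. Condensed from the trace, the target side of a working TLS listener reduces to this rpc.py sequence, with flags exactly as tls.sh issues them:

# Sketch: target-side TLS setup as performed by setup_nvmf_tgt in the trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.lx5ZwEtvWP

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k            # -k marks the listener secure (TLS)
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY"   # requires 0600 perms on $KEY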
07:51:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1124305 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1124305 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1124305 ']' 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.485 07:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.485 [2024-07-15 07:51:32.647093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:41.485 [2024-07-15 07:51:32.647283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.743 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.743 [2024-07-15 07:51:32.790594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.002 [2024-07-15 07:51:33.044375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.002 [2024-07-15 07:51:33.044460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.002 [2024-07-15 07:51:33.044491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.002 [2024-07-15 07:51:33.044517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.002 [2024-07-15 07:51:33.044538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
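The tracepoint notices above repeat at every target start because nvmf_tgt runs with -e 0xFFFF. Both capture options the notices name work while the target is up; a sketch, using the commands the notices themselves suggest:

# Sketch: the two trace-capture options from the app_setup_trace notices above.
spdk_trace -s nvmf -i 0          # decode a live snapshot of instance 0
cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw buffer for offline analysis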
00:24:42.002 [2024-07-15 07:51:33.044588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lx5ZwEtvWP 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lx5ZwEtvWP 00:24:42.566 07:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:42.823 [2024-07-15 07:51:33.833995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.823 07:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:43.080 07:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:43.337 [2024-07-15 07:51:34.311208] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.337 [2024-07-15 07:51:34.311525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.337 07:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:43.595 malloc0 00:24:43.595 07:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:43.852 07:51:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lx5ZwEtvWP 00:24:44.110 [2024-07-15 07:51:35.094837] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1124712 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1124712 /var/tmp/bdevperf.sock 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1124712 ']' 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.110 07:51:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.110 [2024-07-15 07:51:35.192836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:44.110 [2024-07-15 07:51:35.192998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124712 ] 00:24:44.110 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.110 [2024-07-15 07:51:35.315221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.369 [2024-07-15 07:51:35.536708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.935 07:51:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.935 07:51:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:44.935 07:51:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lx5ZwEtvWP 00:24:45.192 [2024-07-15 07:51:36.336998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.192 [2024-07-15 07:51:36.337221] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:45.192 TLSTESTn1 00:24:45.450 07:51:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:45.708 07:51:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:45.708 "subsystems": [ 00:24:45.708 { 00:24:45.708 "subsystem": "keyring", 00:24:45.708 "config": [] 00:24:45.708 }, 00:24:45.708 { 00:24:45.708 "subsystem": "iobuf", 00:24:45.708 "config": [ 00:24:45.708 { 00:24:45.708 "method": "iobuf_set_options", 00:24:45.708 "params": { 00:24:45.708 "small_pool_count": 8192, 00:24:45.708 "large_pool_count": 1024, 00:24:45.708 "small_bufsize": 8192, 00:24:45.708 "large_bufsize": 135168 00:24:45.708 } 00:24:45.708 } 00:24:45.708 ] 00:24:45.708 }, 00:24:45.708 { 00:24:45.708 "subsystem": "sock", 00:24:45.708 "config": [ 00:24:45.708 { 00:24:45.708 "method": "sock_set_default_impl", 00:24:45.708 "params": { 00:24:45.708 "impl_name": "posix" 00:24:45.708 } 00:24:45.708 }, 00:24:45.708 { 00:24:45.708 "method": "sock_impl_set_options", 00:24:45.708 "params": { 00:24:45.708 "impl_name": "ssl", 00:24:45.708 "recv_buf_size": 4096, 00:24:45.708 "send_buf_size": 4096, 00:24:45.708 "enable_recv_pipe": true, 00:24:45.708 "enable_quickack": false, 00:24:45.708 "enable_placement_id": 0, 00:24:45.708 "enable_zerocopy_send_server": true, 00:24:45.708 "enable_zerocopy_send_client": false, 00:24:45.708 "zerocopy_threshold": 0, 00:24:45.708 "tls_version": 0, 00:24:45.708 "enable_ktls": false 00:24:45.708 } 00:24:45.708 }, 00:24:45.708 { 00:24:45.708 "method": "sock_impl_set_options", 00:24:45.708 "params": { 00:24:45.709 "impl_name": "posix", 00:24:45.709 "recv_buf_size": 2097152, 00:24:45.709 
"send_buf_size": 2097152, 00:24:45.709 "enable_recv_pipe": true, 00:24:45.709 "enable_quickack": false, 00:24:45.709 "enable_placement_id": 0, 00:24:45.709 "enable_zerocopy_send_server": true, 00:24:45.709 "enable_zerocopy_send_client": false, 00:24:45.709 "zerocopy_threshold": 0, 00:24:45.709 "tls_version": 0, 00:24:45.709 "enable_ktls": false 00:24:45.709 } 00:24:45.709 } 00:24:45.709 ] 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "subsystem": "vmd", 00:24:45.709 "config": [] 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "subsystem": "accel", 00:24:45.709 "config": [ 00:24:45.709 { 00:24:45.709 "method": "accel_set_options", 00:24:45.709 "params": { 00:24:45.709 "small_cache_size": 128, 00:24:45.709 "large_cache_size": 16, 00:24:45.709 "task_count": 2048, 00:24:45.709 "sequence_count": 2048, 00:24:45.709 "buf_count": 2048 00:24:45.709 } 00:24:45.709 } 00:24:45.709 ] 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "subsystem": "bdev", 00:24:45.709 "config": [ 00:24:45.709 { 00:24:45.709 "method": "bdev_set_options", 00:24:45.709 "params": { 00:24:45.709 "bdev_io_pool_size": 65535, 00:24:45.709 "bdev_io_cache_size": 256, 00:24:45.709 "bdev_auto_examine": true, 00:24:45.709 "iobuf_small_cache_size": 128, 00:24:45.709 "iobuf_large_cache_size": 16 00:24:45.709 } 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "method": "bdev_raid_set_options", 00:24:45.709 "params": { 00:24:45.709 "process_window_size_kb": 1024 00:24:45.709 } 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "method": "bdev_iscsi_set_options", 00:24:45.709 "params": { 00:24:45.709 "timeout_sec": 30 00:24:45.709 } 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "method": "bdev_nvme_set_options", 00:24:45.709 "params": { 00:24:45.709 "action_on_timeout": "none", 00:24:45.709 "timeout_us": 0, 00:24:45.709 "timeout_admin_us": 0, 00:24:45.709 "keep_alive_timeout_ms": 10000, 00:24:45.709 "arbitration_burst": 0, 00:24:45.709 "low_priority_weight": 0, 00:24:45.709 "medium_priority_weight": 0, 00:24:45.709 "high_priority_weight": 0, 00:24:45.709 "nvme_adminq_poll_period_us": 10000, 00:24:45.709 "nvme_ioq_poll_period_us": 0, 00:24:45.709 "io_queue_requests": 0, 00:24:45.709 "delay_cmd_submit": true, 00:24:45.709 "transport_retry_count": 4, 00:24:45.709 "bdev_retry_count": 3, 00:24:45.709 "transport_ack_timeout": 0, 00:24:45.709 "ctrlr_loss_timeout_sec": 0, 00:24:45.709 "reconnect_delay_sec": 0, 00:24:45.709 "fast_io_fail_timeout_sec": 0, 00:24:45.709 "disable_auto_failback": false, 00:24:45.709 "generate_uuids": false, 00:24:45.709 "transport_tos": 0, 00:24:45.709 "nvme_error_stat": false, 00:24:45.709 "rdma_srq_size": 0, 00:24:45.709 "io_path_stat": false, 00:24:45.709 "allow_accel_sequence": false, 00:24:45.709 "rdma_max_cq_size": 0, 00:24:45.709 "rdma_cm_event_timeout_ms": 0, 00:24:45.709 "dhchap_digests": [ 00:24:45.709 "sha256", 00:24:45.709 "sha384", 00:24:45.709 "sha512" 00:24:45.709 ], 00:24:45.709 "dhchap_dhgroups": [ 00:24:45.709 "null", 00:24:45.709 "ffdhe2048", 00:24:45.709 "ffdhe3072", 00:24:45.709 "ffdhe4096", 00:24:45.709 "ffdhe6144", 00:24:45.709 "ffdhe8192" 00:24:45.709 ] 00:24:45.709 } 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "method": "bdev_nvme_set_hotplug", 00:24:45.709 "params": { 00:24:45.709 "period_us": 100000, 00:24:45.709 "enable": false 00:24:45.709 } 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "method": "bdev_malloc_create", 00:24:45.709 "params": { 00:24:45.709 "name": "malloc0", 00:24:45.709 "num_blocks": 8192, 00:24:45.709 "block_size": 4096, 00:24:45.709 "physical_block_size": 4096, 00:24:45.709 "uuid": 
"47a35d45-f36c-4522-9014-6f12fe0938ea", 00:24:45.709 "optimal_io_boundary": 0 00:24:45.709 } 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "method": "bdev_wait_for_examine" 00:24:45.709 } 00:24:45.709 ] 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "subsystem": "nbd", 00:24:45.709 "config": [] 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "subsystem": "scheduler", 00:24:45.709 "config": [ 00:24:45.709 { 00:24:45.709 "method": "framework_set_scheduler", 00:24:45.709 "params": { 00:24:45.709 "name": "static" 00:24:45.709 } 00:24:45.709 } 00:24:45.709 ] 00:24:45.709 }, 00:24:45.709 { 00:24:45.709 "subsystem": "nvmf", 00:24:45.709 "config": [ 00:24:45.709 { 00:24:45.709 "method": "nvmf_set_config", 00:24:45.709 "params": { 00:24:45.709 "discovery_filter": "match_any", 00:24:45.709 "admin_cmd_passthru": { 00:24:45.709 "identify_ctrlr": false 00:24:45.709 } 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_set_max_subsystems", 00:24:45.710 "params": { 00:24:45.710 "max_subsystems": 1024 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_set_crdt", 00:24:45.710 "params": { 00:24:45.710 "crdt1": 0, 00:24:45.710 "crdt2": 0, 00:24:45.710 "crdt3": 0 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_create_transport", 00:24:45.710 "params": { 00:24:45.710 "trtype": "TCP", 00:24:45.710 "max_queue_depth": 128, 00:24:45.710 "max_io_qpairs_per_ctrlr": 127, 00:24:45.710 "in_capsule_data_size": 4096, 00:24:45.710 "max_io_size": 131072, 00:24:45.710 "io_unit_size": 131072, 00:24:45.710 "max_aq_depth": 128, 00:24:45.710 "num_shared_buffers": 511, 00:24:45.710 "buf_cache_size": 4294967295, 00:24:45.710 "dif_insert_or_strip": false, 00:24:45.710 "zcopy": false, 00:24:45.710 "c2h_success": false, 00:24:45.710 "sock_priority": 0, 00:24:45.710 "abort_timeout_sec": 1, 00:24:45.710 "ack_timeout": 0, 00:24:45.710 "data_wr_pool_size": 0 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_create_subsystem", 00:24:45.710 "params": { 00:24:45.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.710 "allow_any_host": false, 00:24:45.710 "serial_number": "SPDK00000000000001", 00:24:45.710 "model_number": "SPDK bdev Controller", 00:24:45.710 "max_namespaces": 10, 00:24:45.710 "min_cntlid": 1, 00:24:45.710 "max_cntlid": 65519, 00:24:45.710 "ana_reporting": false 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_subsystem_add_host", 00:24:45.710 "params": { 00:24:45.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.710 "host": "nqn.2016-06.io.spdk:host1", 00:24:45.710 "psk": "/tmp/tmp.lx5ZwEtvWP" 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_subsystem_add_ns", 00:24:45.710 "params": { 00:24:45.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.710 "namespace": { 00:24:45.710 "nsid": 1, 00:24:45.710 "bdev_name": "malloc0", 00:24:45.710 "nguid": "47A35D45F36C452290146F12FE0938EA", 00:24:45.710 "uuid": "47a35d45-f36c-4522-9014-6f12fe0938ea", 00:24:45.710 "no_auto_visible": false 00:24:45.710 } 00:24:45.710 } 00:24:45.710 }, 00:24:45.710 { 00:24:45.710 "method": "nvmf_subsystem_add_listener", 00:24:45.710 "params": { 00:24:45.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.710 "listen_address": { 00:24:45.710 "trtype": "TCP", 00:24:45.710 "adrfam": "IPv4", 00:24:45.710 "traddr": "10.0.0.2", 00:24:45.710 "trsvcid": "4420" 00:24:45.710 }, 00:24:45.710 "secure_channel": true 00:24:45.710 } 00:24:45.710 } 00:24:45.710 ] 00:24:45.710 } 00:24:45.710 ] 00:24:45.710 }' 00:24:45.710 07:51:36 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:45.969 "subsystems": [ 00:24:45.969 { 00:24:45.969 "subsystem": "keyring", 00:24:45.969 "config": [] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "iobuf", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "iobuf_set_options", 00:24:45.969 "params": { 00:24:45.969 "small_pool_count": 8192, 00:24:45.969 "large_pool_count": 1024, 00:24:45.969 "small_bufsize": 8192, 00:24:45.969 "large_bufsize": 135168 00:24:45.969 } 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "sock", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "sock_set_default_impl", 00:24:45.969 "params": { 00:24:45.969 "impl_name": "posix" 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "sock_impl_set_options", 00:24:45.969 "params": { 00:24:45.969 "impl_name": "ssl", 00:24:45.969 "recv_buf_size": 4096, 00:24:45.969 "send_buf_size": 4096, 00:24:45.969 "enable_recv_pipe": true, 00:24:45.969 "enable_quickack": false, 00:24:45.969 "enable_placement_id": 0, 00:24:45.969 "enable_zerocopy_send_server": true, 00:24:45.969 "enable_zerocopy_send_client": false, 00:24:45.969 "zerocopy_threshold": 0, 00:24:45.969 "tls_version": 0, 00:24:45.969 "enable_ktls": false 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "sock_impl_set_options", 00:24:45.969 "params": { 00:24:45.969 "impl_name": "posix", 00:24:45.969 "recv_buf_size": 2097152, 00:24:45.969 "send_buf_size": 2097152, 00:24:45.969 "enable_recv_pipe": true, 00:24:45.969 "enable_quickack": false, 00:24:45.969 "enable_placement_id": 0, 00:24:45.969 "enable_zerocopy_send_server": true, 00:24:45.969 "enable_zerocopy_send_client": false, 00:24:45.969 "zerocopy_threshold": 0, 00:24:45.969 "tls_version": 0, 00:24:45.969 "enable_ktls": false 00:24:45.969 } 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "vmd", 00:24:45.969 "config": [] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "accel", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "accel_set_options", 00:24:45.969 "params": { 00:24:45.969 "small_cache_size": 128, 00:24:45.969 "large_cache_size": 16, 00:24:45.969 "task_count": 2048, 00:24:45.969 "sequence_count": 2048, 00:24:45.969 "buf_count": 2048 00:24:45.969 } 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "bdev", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "bdev_set_options", 00:24:45.969 "params": { 00:24:45.969 "bdev_io_pool_size": 65535, 00:24:45.969 "bdev_io_cache_size": 256, 00:24:45.969 "bdev_auto_examine": true, 00:24:45.969 "iobuf_small_cache_size": 128, 00:24:45.969 "iobuf_large_cache_size": 16 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_raid_set_options", 00:24:45.969 "params": { 00:24:45.969 "process_window_size_kb": 1024 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_iscsi_set_options", 00:24:45.969 "params": { 00:24:45.969 "timeout_sec": 30 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_nvme_set_options", 00:24:45.969 "params": { 00:24:45.969 "action_on_timeout": "none", 00:24:45.969 "timeout_us": 0, 00:24:45.969 "timeout_admin_us": 0, 00:24:45.969 "keep_alive_timeout_ms": 10000, 00:24:45.969 "arbitration_burst": 0, 
00:24:45.969 "low_priority_weight": 0, 00:24:45.969 "medium_priority_weight": 0, 00:24:45.969 "high_priority_weight": 0, 00:24:45.969 "nvme_adminq_poll_period_us": 10000, 00:24:45.969 "nvme_ioq_poll_period_us": 0, 00:24:45.969 "io_queue_requests": 512, 00:24:45.969 "delay_cmd_submit": true, 00:24:45.969 "transport_retry_count": 4, 00:24:45.969 "bdev_retry_count": 3, 00:24:45.969 "transport_ack_timeout": 0, 00:24:45.969 "ctrlr_loss_timeout_sec": 0, 00:24:45.969 "reconnect_delay_sec": 0, 00:24:45.969 "fast_io_fail_timeout_sec": 0, 00:24:45.969 "disable_auto_failback": false, 00:24:45.969 "generate_uuids": false, 00:24:45.969 "transport_tos": 0, 00:24:45.969 "nvme_error_stat": false, 00:24:45.969 "rdma_srq_size": 0, 00:24:45.969 "io_path_stat": false, 00:24:45.969 "allow_accel_sequence": false, 00:24:45.969 "rdma_max_cq_size": 0, 00:24:45.969 "rdma_cm_event_timeout_ms": 0, 00:24:45.969 "dhchap_digests": [ 00:24:45.969 "sha256", 00:24:45.969 "sha384", 00:24:45.969 "sha512" 00:24:45.969 ], 00:24:45.969 "dhchap_dhgroups": [ 00:24:45.969 "null", 00:24:45.969 "ffdhe2048", 00:24:45.969 "ffdhe3072", 00:24:45.969 "ffdhe4096", 00:24:45.969 "ffdhe6144", 00:24:45.969 "ffdhe8192" 00:24:45.969 ] 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_nvme_attach_controller", 00:24:45.969 "params": { 00:24:45.969 "name": "TLSTEST", 00:24:45.969 "trtype": "TCP", 00:24:45.969 "adrfam": "IPv4", 00:24:45.969 "traddr": "10.0.0.2", 00:24:45.969 "trsvcid": "4420", 00:24:45.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.969 "prchk_reftag": false, 00:24:45.969 "prchk_guard": false, 00:24:45.969 "ctrlr_loss_timeout_sec": 0, 00:24:45.969 "reconnect_delay_sec": 0, 00:24:45.969 "fast_io_fail_timeout_sec": 0, 00:24:45.969 "psk": "/tmp/tmp.lx5ZwEtvWP", 00:24:45.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.969 "hdgst": false, 00:24:45.969 "ddgst": false 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_nvme_set_hotplug", 00:24:45.969 "params": { 00:24:45.969 "period_us": 100000, 00:24:45.969 "enable": false 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_wait_for_examine" 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "nbd", 00:24:45.969 "config": [] 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }' 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1124712 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1124712 ']' 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1124712 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124712 00:24:45.969 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:45.970 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:45.970 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124712' 00:24:45.970 killing process with pid 1124712 00:24:45.970 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1124712 00:24:45.970 Received shutdown signal, test time was about 10.000000 seconds 00:24:45.970 00:24:45.970 Latency(us) 00:24:45.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:24:45.970 =================================================================================================================== 00:24:45.970 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:45.970 [2024-07-15 07:51:37.085669] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:45.970 07:51:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1124712 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1124305 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1124305 ']' 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1124305 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124305 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124305' 00:24:46.928 killing process with pid 1124305 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1124305 00:24:46.928 [2024-07-15 07:51:38.075792] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:46.928 07:51:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1124305 00:24:48.307 07:51:39 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:48.307 07:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:48.307 07:51:39 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:48.307 "subsystems": [ 00:24:48.307 { 00:24:48.307 "subsystem": "keyring", 00:24:48.307 "config": [] 00:24:48.307 }, 00:24:48.307 { 00:24:48.307 "subsystem": "iobuf", 00:24:48.307 "config": [ 00:24:48.307 { 00:24:48.307 "method": "iobuf_set_options", 00:24:48.307 "params": { 00:24:48.307 "small_pool_count": 8192, 00:24:48.307 "large_pool_count": 1024, 00:24:48.307 "small_bufsize": 8192, 00:24:48.307 "large_bufsize": 135168 00:24:48.307 } 00:24:48.307 } 00:24:48.307 ] 00:24:48.307 }, 00:24:48.307 { 00:24:48.307 "subsystem": "sock", 00:24:48.307 "config": [ 00:24:48.307 { 00:24:48.307 "method": "sock_set_default_impl", 00:24:48.307 "params": { 00:24:48.307 "impl_name": "posix" 00:24:48.307 } 00:24:48.307 }, 00:24:48.307 { 00:24:48.307 "method": "sock_impl_set_options", 00:24:48.307 "params": { 00:24:48.307 "impl_name": "ssl", 00:24:48.307 "recv_buf_size": 4096, 00:24:48.307 "send_buf_size": 4096, 00:24:48.307 "enable_recv_pipe": true, 00:24:48.307 "enable_quickack": false, 00:24:48.307 "enable_placement_id": 0, 00:24:48.307 "enable_zerocopy_send_server": true, 00:24:48.307 "enable_zerocopy_send_client": false, 00:24:48.307 "zerocopy_threshold": 0, 00:24:48.307 "tls_version": 0, 00:24:48.307 "enable_ktls": false 00:24:48.307 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "sock_impl_set_options", 00:24:48.308 "params": { 00:24:48.308 "impl_name": "posix", 00:24:48.308 "recv_buf_size": 2097152, 00:24:48.308 "send_buf_size": 2097152, 00:24:48.308 "enable_recv_pipe": true, 
00:24:48.308 "enable_quickack": false, 00:24:48.308 "enable_placement_id": 0, 00:24:48.308 "enable_zerocopy_send_server": true, 00:24:48.308 "enable_zerocopy_send_client": false, 00:24:48.308 "zerocopy_threshold": 0, 00:24:48.308 "tls_version": 0, 00:24:48.308 "enable_ktls": false 00:24:48.308 } 00:24:48.308 } 00:24:48.308 ] 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "subsystem": "vmd", 00:24:48.308 "config": [] 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "subsystem": "accel", 00:24:48.308 "config": [ 00:24:48.308 { 00:24:48.308 "method": "accel_set_options", 00:24:48.308 "params": { 00:24:48.308 "small_cache_size": 128, 00:24:48.308 "large_cache_size": 16, 00:24:48.308 "task_count": 2048, 00:24:48.308 "sequence_count": 2048, 00:24:48.308 "buf_count": 2048 00:24:48.308 } 00:24:48.308 } 00:24:48.308 ] 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "subsystem": "bdev", 00:24:48.308 "config": [ 00:24:48.308 { 00:24:48.308 "method": "bdev_set_options", 00:24:48.308 "params": { 00:24:48.308 "bdev_io_pool_size": 65535, 00:24:48.308 "bdev_io_cache_size": 256, 00:24:48.308 "bdev_auto_examine": true, 00:24:48.308 "iobuf_small_cache_size": 128, 00:24:48.308 "iobuf_large_cache_size": 16 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "bdev_raid_set_options", 00:24:48.308 "params": { 00:24:48.308 "process_window_size_kb": 1024 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "bdev_iscsi_set_options", 00:24:48.308 "params": { 00:24:48.308 "timeout_sec": 30 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "bdev_nvme_set_options", 00:24:48.308 "params": { 00:24:48.308 "action_on_timeout": "none", 00:24:48.308 "timeout_us": 0, 00:24:48.308 "timeout_admin_us": 0, 00:24:48.308 "keep_alive_timeout_ms": 10000, 00:24:48.308 "arbitration_burst": 0, 00:24:48.308 "low_priority_weight": 0, 00:24:48.308 "medium_priority_weight": 0, 00:24:48.308 "high_priority_weight": 0, 00:24:48.308 "nvme_adminq_poll_period_us": 10000, 00:24:48.308 "nvme_ioq_poll_period_us": 0, 00:24:48.308 "io_queue_requests": 0, 00:24:48.308 "delay_cmd_submit": true, 00:24:48.308 "transport_retry_count": 4, 00:24:48.308 "bdev_retry_count": 3, 00:24:48.308 "transport_ack_timeout": 0, 00:24:48.308 "ctrlr_loss_timeout_sec": 0, 00:24:48.308 "reconnect_delay_sec": 0, 00:24:48.308 "fast_io_fail_timeout_sec": 0, 00:24:48.308 "disable_auto_failback": false, 00:24:48.308 "generate_uuids": false, 00:24:48.308 "transport_tos": 0, 00:24:48.308 "nvme_error_stat": false, 00:24:48.308 "rdma_srq_size": 0, 00:24:48.308 "io_path_stat": false, 00:24:48.308 "allow_accel_sequence": false, 00:24:48.308 "rdma_max_cq_size": 0, 00:24:48.308 "rdma_cm_event_timeout_ms": 0, 00:24:48.308 "dhchap_digests": [ 00:24:48.308 "sha256", 00:24:48.308 "sha384", 00:24:48.308 "sha512" 00:24:48.308 ], 00:24:48.308 "dhchap_dhgroups": [ 00:24:48.308 "null", 00:24:48.308 "ffdhe2048", 00:24:48.308 "ffdhe3072", 00:24:48.308 "ffdhe4096", 00:24:48.308 "ffdhe6144", 00:24:48.308 "ffdhe8192" 00:24:48.308 ] 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "bdev_nvme_set_hotplug", 00:24:48.308 "params": { 00:24:48.308 "period_us": 100000, 00:24:48.308 "enable": false 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "bdev_malloc_create", 00:24:48.308 "params": { 00:24:48.308 "name": "malloc0", 00:24:48.308 "num_blocks": 8192, 00:24:48.308 "block_size": 4096, 00:24:48.308 "physical_block_size": 4096, 00:24:48.308 "uuid": "47a35d45-f36c-4522-9014-6f12fe0938ea", 00:24:48.308 "optimal_io_boundary": 0 
00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "bdev_wait_for_examine" 00:24:48.308 } 00:24:48.308 ] 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "subsystem": "nbd", 00:24:48.308 "config": [] 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "subsystem": "scheduler", 00:24:48.308 "config": [ 00:24:48.308 { 00:24:48.308 "method": "framework_set_scheduler", 00:24:48.308 "params": { 00:24:48.308 "name": "static" 00:24:48.308 } 00:24:48.308 } 00:24:48.308 ] 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "subsystem": "nvmf", 00:24:48.308 "config": [ 00:24:48.308 { 00:24:48.308 "method": "nvmf_set_config", 00:24:48.308 "params": { 00:24:48.308 "discovery_filter": "match_any", 00:24:48.308 "admin_cmd_passthru": { 00:24:48.308 "identify_ctrlr": false 00:24:48.308 } 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_set_max_subsystems", 00:24:48.308 "params": { 00:24:48.308 "max_subsystems": 1024 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_set_crdt", 00:24:48.308 "params": { 00:24:48.308 "crdt1": 0, 00:24:48.308 "crdt2": 0, 00:24:48.308 "crdt3": 0 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_create_transport", 00:24:48.308 "params": { 00:24:48.308 "trtype": "TCP", 00:24:48.308 "max_queue_depth": 128, 00:24:48.308 "max_io_qpairs_per_ctrlr": 127, 00:24:48.308 "in_capsule_data_size": 4096, 00:24:48.308 "max_io_size": 131072, 00:24:48.308 "io_unit_size": 131072, 00:24:48.308 "max_aq_depth": 128, 00:24:48.308 "num_shared_buffers": 511, 00:24:48.308 "buf_cache_size": 4294967295, 00:24:48.308 "dif_insert_or_strip": false, 00:24:48.308 "zcopy": false, 00:24:48.308 "c2h_success": false, 00:24:48.308 "sock_priority": 0, 00:24:48.308 "abort_timeout_sec": 1, 00:24:48.308 "ack_timeout": 0, 00:24:48.308 "data_wr_pool_size": 0 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_create_subsystem", 00:24:48.308 "params": { 00:24:48.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.308 "allow_any_host": false, 00:24:48.308 "serial_number": "SPDK00000000000001", 00:24:48.308 "model_number": "SPDK bdev Controller", 00:24:48.308 "max_namespaces": 10, 00:24:48.308 "min_cntlid": 1, 00:24:48.308 "max_cntlid": 65519, 00:24:48.308 "ana_reporting": false 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_subsystem_add_host", 00:24:48.308 "params": { 00:24:48.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.308 "host": "nqn.2016-06.io.spdk:host1", 00:24:48.308 "psk": "/tmp/tmp.lx5ZwEtvWP" 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_subsystem_add_ns", 00:24:48.308 "params": { 00:24:48.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.308 "namespace": { 00:24:48.308 "nsid": 1, 00:24:48.308 "bdev_name": "malloc0", 00:24:48.308 "nguid": "47A35D45F36C452290146F12FE0938EA", 00:24:48.308 "uuid": "47a35d45-f36c-4522-9014-6f12fe0938ea", 00:24:48.308 "no_auto_visible": false 00:24:48.308 } 00:24:48.308 } 00:24:48.308 }, 00:24:48.308 { 00:24:48.308 "method": "nvmf_subsystem_add_listener", 00:24:48.308 "params": { 00:24:48.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.308 "listen_address": { 00:24:48.308 "trtype": "TCP", 00:24:48.308 "adrfam": "IPv4", 00:24:48.308 "traddr": "10.0.0.2", 00:24:48.308 "trsvcid": "4420" 00:24:48.308 }, 00:24:48.308 "secure_channel": true 00:24:48.308 } 00:24:48.308 } 00:24:48.308 ] 00:24:48.308 } 00:24:48.308 ] 00:24:48.308 }' 00:24:48.308 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:48.308 
07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.308 07:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1125259 00:24:48.308 07:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:48.308 07:51:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1125259 00:24:48.308 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1125259 ']' 00:24:48.308 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.309 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.309 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.309 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.309 07:51:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.567 [2024-07-15 07:51:39.600246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:48.567 [2024-07-15 07:51:39.600374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.567 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.567 [2024-07-15 07:51:39.737802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.825 [2024-07-15 07:51:39.990891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.825 [2024-07-15 07:51:39.990972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.825 [2024-07-15 07:51:39.991002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.825 [2024-07-15 07:51:39.991027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.825 [2024-07-15 07:51:39.991048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
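The trace above shows the harness pattern for bringing up the target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with its JSON config supplied on /dev/fd/62, and the script then blocks until the /var/tmp/spdk.sock RPC socket answers. A minimal bash sketch of that pattern follows; the polling loop stands in for the harness's waitforlisten helper, and CONFIG_JSON is a placeholder for the config the script pipes in.

```bash
#!/usr/bin/env bash
# Minimal sketch of the launch pattern traced above. CONFIG_JSON stands in
# for the config the harness generates; the real script uses the
# waitforlisten helper from autotest_common.sh instead of this polling loop.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

# Process substitution is what surfaces as /dev/fd/62 in the trace output.
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$CONFIG_JSON") &
nvmfpid=$!

# Poll until the UNIX-domain RPC socket appears (the real helper also
# issues an RPC to confirm the app is ready before returning).
until [ -S "$RPC_SOCK" ]; do sleep 0.1; done
```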
00:24:48.825 [2024-07-15 07:51:39.991204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.391 [2024-07-15 07:51:40.525812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.391 [2024-07-15 07:51:40.541741] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:49.391 [2024-07-15 07:51:40.557763] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:49.391 [2024-07-15 07:51:40.558105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1125400 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1125400 /var/tmp/bdevperf.sock 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1125400 ']' 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
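The commands traced above start bdevperf in a parked state: -z makes it wait for an RPC trigger, -r gives it a private RPC socket, and the bdev/controller config (echoed next) arrives on /dev/fd/63. A sketch of the two-step pattern, including the perform_tests call that appears further down in this log:

```bash
# Start bdevperf parked (-z) on its own RPC socket; the JSON config echoed
# below in the trace is fed in on /dev/fd/63 via process substitution.
# BPERF_JSON is a placeholder for that config.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$BPERF_JSON") &

# Once the socket is up, kick off the queued workload out of band
# (this is the perform_tests call seen later in the trace):
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
```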
00:24:49.391 07:51:40 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:49.391 "subsystems": [ 00:24:49.391 { 00:24:49.391 "subsystem": "keyring", 00:24:49.391 "config": [] 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "subsystem": "iobuf", 00:24:49.391 "config": [ 00:24:49.391 { 00:24:49.391 "method": "iobuf_set_options", 00:24:49.391 "params": { 00:24:49.391 "small_pool_count": 8192, 00:24:49.391 "large_pool_count": 1024, 00:24:49.391 "small_bufsize": 8192, 00:24:49.391 "large_bufsize": 135168 00:24:49.391 } 00:24:49.391 } 00:24:49.391 ] 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "subsystem": "sock", 00:24:49.391 "config": [ 00:24:49.391 { 00:24:49.391 "method": "sock_set_default_impl", 00:24:49.391 "params": { 00:24:49.391 "impl_name": "posix" 00:24:49.391 } 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "method": "sock_impl_set_options", 00:24:49.391 "params": { 00:24:49.391 "impl_name": "ssl", 00:24:49.391 "recv_buf_size": 4096, 00:24:49.391 "send_buf_size": 4096, 00:24:49.391 "enable_recv_pipe": true, 00:24:49.391 "enable_quickack": false, 00:24:49.391 "enable_placement_id": 0, 00:24:49.391 "enable_zerocopy_send_server": true, 00:24:49.391 "enable_zerocopy_send_client": false, 00:24:49.391 "zerocopy_threshold": 0, 00:24:49.391 "tls_version": 0, 00:24:49.391 "enable_ktls": false 00:24:49.391 } 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "method": "sock_impl_set_options", 00:24:49.391 "params": { 00:24:49.391 "impl_name": "posix", 00:24:49.391 "recv_buf_size": 2097152, 00:24:49.391 "send_buf_size": 2097152, 00:24:49.391 "enable_recv_pipe": true, 00:24:49.391 "enable_quickack": false, 00:24:49.391 "enable_placement_id": 0, 00:24:49.391 "enable_zerocopy_send_server": true, 00:24:49.391 "enable_zerocopy_send_client": false, 00:24:49.391 "zerocopy_threshold": 0, 00:24:49.391 "tls_version": 0, 00:24:49.391 "enable_ktls": false 00:24:49.391 } 00:24:49.391 } 00:24:49.391 ] 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "subsystem": "vmd", 00:24:49.391 "config": [] 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "subsystem": "accel", 00:24:49.391 "config": [ 00:24:49.391 { 00:24:49.391 "method": "accel_set_options", 00:24:49.391 "params": { 00:24:49.391 "small_cache_size": 128, 00:24:49.391 "large_cache_size": 16, 00:24:49.391 "task_count": 2048, 00:24:49.391 "sequence_count": 2048, 00:24:49.391 "buf_count": 2048 00:24:49.391 } 00:24:49.391 } 00:24:49.391 ] 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "subsystem": "bdev", 00:24:49.391 "config": [ 00:24:49.391 { 00:24:49.391 "method": "bdev_set_options", 00:24:49.391 "params": { 00:24:49.391 "bdev_io_pool_size": 65535, 00:24:49.391 "bdev_io_cache_size": 256, 00:24:49.391 "bdev_auto_examine": true, 00:24:49.391 "iobuf_small_cache_size": 128, 00:24:49.391 "iobuf_large_cache_size": 16 00:24:49.391 } 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "method": "bdev_raid_set_options", 00:24:49.391 "params": { 00:24:49.391 "process_window_size_kb": 1024 00:24:49.391 } 00:24:49.391 }, 00:24:49.391 { 00:24:49.391 "method": "bdev_iscsi_set_options", 00:24:49.391 "params": { 00:24:49.392 "timeout_sec": 30 00:24:49.392 } 00:24:49.392 }, 00:24:49.392 { 00:24:49.392 "method": "bdev_nvme_set_options", 00:24:49.392 "params": { 00:24:49.392 "action_on_timeout": "none", 00:24:49.392 "timeout_us": 0, 00:24:49.392 "timeout_admin_us": 0, 00:24:49.392 "keep_alive_timeout_ms": 10000, 00:24:49.392 "arbitration_burst": 0, 00:24:49.392 "low_priority_weight": 0, 00:24:49.392 "medium_priority_weight": 0, 00:24:49.392 "high_priority_weight": 0, 00:24:49.392 
"nvme_adminq_poll_period_us": 10000, 00:24:49.392 "nvme_ioq_poll_period_us": 0, 00:24:49.392 "io_queue_requests": 512, 00:24:49.392 "delay_cmd_submit": true, 00:24:49.392 "transport_retry_count": 4, 00:24:49.392 "bdev_retry_count": 3, 00:24:49.392 "transport_ack_timeout": 0, 00:24:49.392 "ctrlr_loss_timeout_sec": 0, 00:24:49.392 "reconnect_delay_sec": 0, 00:24:49.392 "fast_io_fail_timeout_sec": 0, 00:24:49.392 "disable_auto_failback": false, 00:24:49.392 "generate_uuids": false, 00:24:49.392 "transport_tos": 0, 00:24:49.392 "nvme_error_stat": false, 00:24:49.392 "rdma_srq_size": 0, 00:24:49.392 "io_path_stat": false, 00:24:49.392 "allow_accel_sequence": false, 00:24:49.392 "rdma_max_cq_size": 0, 00:24:49.392 "rdma_cm_event_timeout_ms": 0, 00:24:49.392 "dhchap_digests": [ 00:24:49.392 "sha256", 00:24:49.392 "sha384", 00:24:49.392 "sha512" 00:24:49.392 ], 00:24:49.392 "dhchap_dhgroups": [ 00:24:49.392 "null", 00:24:49.392 "ffdhe2048", 00:24:49.392 "ffdhe3072", 00:24:49.392 "ffdhe4096", 00:24:49.392 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.392 he6144", 00:24:49.392 "ffdhe8192" 00:24:49.392 ] 00:24:49.392 } 00:24:49.392 }, 00:24:49.392 { 00:24:49.392 "method": "bdev_nvme_attach_controller", 00:24:49.392 "params": { 00:24:49.392 "name": "TLSTEST", 00:24:49.392 "trtype": "TCP", 00:24:49.392 "adrfam": "IPv4", 00:24:49.392 "traddr": "10.0.0.2", 00:24:49.392 "trsvcid": "4420", 00:24:49.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.392 "prchk_reftag": false, 00:24:49.392 "prchk_guard": false, 00:24:49.392 "ctrlr_loss_timeout_sec": 0, 00:24:49.392 "reconnect_delay_sec": 0, 00:24:49.392 "fast_io_fail_timeout_sec": 0, 00:24:49.392 "psk": "/tmp/tmp.lx5ZwEtvWP", 00:24:49.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.392 "hdgst": false, 00:24:49.392 "ddgst": false 00:24:49.392 } 00:24:49.392 }, 00:24:49.392 { 00:24:49.392 "method": "bdev_nvme_set_hotplug", 00:24:49.392 "params": { 00:24:49.392 "period_us": 100000, 00:24:49.392 "enable": false 00:24:49.392 } 00:24:49.392 }, 00:24:49.392 { 00:24:49.392 "method": "bdev_wait_for_examine" 00:24:49.392 } 00:24:49.392 ] 00:24:49.392 }, 00:24:49.392 { 00:24:49.392 "subsystem": "nbd", 00:24:49.392 "config": [] 00:24:49.392 } 00:24:49.392 ] 00:24:49.392 }' 00:24:49.392 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.392 07:51:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.650 [2024-07-15 07:51:40.688125] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:49.650 [2024-07-15 07:51:40.688289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125400 ] 00:24:49.650 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.650 [2024-07-15 07:51:40.810937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.908 [2024-07-15 07:51:41.036175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.474 [2024-07-15 07:51:41.425832] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.474 [2024-07-15 07:51:41.426018] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:50.474 07:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.474 07:51:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:50.475 07:51:41 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:50.733 Running I/O for 10 seconds... 00:25:00.695 00:25:00.695 Latency(us) 00:25:00.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.695 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:00.695 Verification LBA range: start 0x0 length 0x2000 00:25:00.695 TLSTESTn1 : 10.04 2722.32 10.63 0.00 0.00 46907.96 8058.50 44661.57 00:25:00.695 =================================================================================================================== 00:25:00.696 Total : 2722.32 10.63 0.00 0.00 46907.96 8058.50 44661.57 00:25:00.696 0 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1125400 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1125400 ']' 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1125400 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125400 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125400' 00:25:00.696 killing process with pid 1125400 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1125400 00:25:00.696 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.696 00:25:00.696 Latency(us) 00:25:00.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.696 =================================================================================================================== 00:25:00.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.696 [2024-07-15 07:51:51.821089] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:00.696 07:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1125400 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1125259 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1125259 ']' 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1125259 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125259 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125259' 00:25:01.630 killing process with pid 1125259 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1125259 00:25:01.630 [2024-07-15 07:51:52.814210] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:01.630 07:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1125259 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1126941 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1126941 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1126941 ']' 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.527 07:51:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:03.527 [2024-07-15 07:51:54.392494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
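The target just launched (pid 1126941) carries no -c config file; the setup_nvmf_tgt sequence traced next builds it entirely over RPC. Condensed into plain rpc.py calls, that sequence is:

```bash
# Condensed from the setup_nvmf_tgt trace that follows (tls.sh@219). The -k
# flag on the listener enables TLS; handing the PSK to nvmf_subsystem_add_host
# as a file path is the deprecated form the log warns about.
RPC="scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.lx5ZwEtvWP
```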
00:25:03.527 [2024-07-15 07:51:54.392655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.527 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.527 [2024-07-15 07:51:54.527278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.786 [2024-07-15 07:51:54.783497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.786 [2024-07-15 07:51:54.783571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.786 [2024-07-15 07:51:54.783600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.786 [2024-07-15 07:51:54.783625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.786 [2024-07-15 07:51:54.783645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.786 [2024-07-15 07:51:54.783698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lx5ZwEtvWP 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lx5ZwEtvWP 00:25:04.352 07:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:04.610 [2024-07-15 07:51:55.585762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.610 07:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:04.867 07:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:05.125 [2024-07-15 07:51:56.123227] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.125 [2024-07-15 07:51:56.123553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.125 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:05.383 malloc0 00:25:05.383 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:05.641 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.lx5ZwEtvWP 00:25:05.900 [2024-07-15 07:51:56.975269] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1127288 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1127288 /var/tmp/bdevperf.sock 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127288 ']' 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.900 07:51:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.900 [2024-07-15 07:51:57.070054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:05.900 [2024-07-15 07:51:57.070209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127288 ] 00:25:06.172 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.172 [2024-07-15 07:51:57.194754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.436 [2024-07-15 07:51:57.438730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.999 07:51:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.999 07:51:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:06.999 07:51:57 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lx5ZwEtvWP 00:25:07.257 07:51:58 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:07.257 [2024-07-15 07:51:58.456247] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.514 nvme0n1 00:25:07.514 07:51:58 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.514 Running I/O for 1 seconds... 
00:25:08.900 00:25:08.900 Latency(us) 00:25:08.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.900 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:08.900 Verification LBA range: start 0x0 length 0x2000 00:25:08.900 nvme0n1 : 1.03 2605.85 10.18 0.00 0.00 48395.84 11262.48 52040.44 00:25:08.900 =================================================================================================================== 00:25:08.900 Total : 2605.85 10.18 0.00 0.00 48395.84 11262.48 52040.44 00:25:08.900 0 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1127288 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127288 ']' 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127288 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127288 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127288' 00:25:08.900 killing process with pid 1127288 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127288 00:25:08.900 Received shutdown signal, test time was about 1.000000 seconds 00:25:08.900 00:25:08.900 Latency(us) 00:25:08.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.900 =================================================================================================================== 00:25:08.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.900 07:51:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127288 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1126941 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1126941 ']' 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1126941 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1126941 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1126941' 00:25:09.839 killing process with pid 1126941 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1126941 00:25:09.839 [2024-07-15 07:52:00.868428] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:09.839 07:52:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1126941 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:11.216 
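The bdevperf run that just completed (pid 1127288) used the keyring flow instead of an inline PSK: the key file is registered once under a name, and that name is passed at attach time, avoiding the deprecated path. From the trace above:

```bash
# Keyring-based TLS attach, as traced for the run above (tls.sh@227/@228):
# register the PSK file under the name key0, then attach by key name.
RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC keyring_file_add_key key0 /tmp/tmp.lx5ZwEtvWP
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
```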
07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1127938 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1127938 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127938 ']' 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.216 07:52:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.216 [2024-07-15 07:52:02.405440] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:11.216 [2024-07-15 07:52:02.405587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.475 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.475 [2024-07-15 07:52:02.550906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.734 [2024-07-15 07:52:02.809942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.734 [2024-07-15 07:52:02.810023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.734 [2024-07-15 07:52:02.810054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.734 [2024-07-15 07:52:02.810080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.734 [2024-07-15 07:52:02.810102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
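The notices above are the standard reminder that the target was started with tracing enabled (-e 0xFFFF). Following the log's own hint, the trace can be snapshotted live or the shared-memory file kept for offline decoding; the spdk_trace path shown is an assumption for this workspace layout.

```bash
# Snapshot the nvmf app's tracepoints while it runs (per the hint above),
# or copy the shared-memory trace file for offline analysis.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/bin/spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # keep for later debugging
```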
00:25:11.734 [2024-07-15 07:52:02.810149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.300 [2024-07-15 07:52:03.374079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.300 malloc0 00:25:12.300 [2024-07-15 07:52:03.449905] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.300 [2024-07-15 07:52:03.450266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1128113 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1128113 /var/tmp/bdevperf.sock 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128113 ']' 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.300 07:52:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.558 [2024-07-15 07:52:03.557154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:12.558 [2024-07-15 07:52:03.557316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128113 ] 00:25:12.558 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.558 [2024-07-15 07:52:03.687031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.815 [2024-07-15 07:52:03.941561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.380 07:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.380 07:52:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:13.380 07:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lx5ZwEtvWP 00:25:13.638 07:52:04 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:13.895 [2024-07-15 07:52:04.959423] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.895 nvme0n1 00:25:13.895 07:52:05 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.152 Running I/O for 1 seconds... 00:25:15.083 00:25:15.083 Latency(us) 00:25:15.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.083 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:15.083 Verification LBA range: start 0x0 length 0x2000 00:25:15.083 nvme0n1 : 1.03 2289.38 8.94 0.00 0.00 55246.29 10097.40 57477.50 00:25:15.083 =================================================================================================================== 00:25:15.083 Total : 2289.38 8.94 0.00 0.00 55246.29 10097.40 57477.50 00:25:15.083 0 00:25:15.083 07:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:15.083 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.083 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.339 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.339 07:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:15.339 "subsystems": [ 00:25:15.339 { 00:25:15.339 "subsystem": "keyring", 00:25:15.339 "config": [ 00:25:15.339 { 00:25:15.339 "method": "keyring_file_add_key", 00:25:15.339 "params": { 00:25:15.339 "name": "key0", 00:25:15.339 "path": "/tmp/tmp.lx5ZwEtvWP" 00:25:15.339 } 00:25:15.339 } 00:25:15.339 ] 00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "subsystem": "iobuf", 00:25:15.339 "config": [ 00:25:15.339 { 00:25:15.339 "method": "iobuf_set_options", 00:25:15.339 "params": { 00:25:15.339 "small_pool_count": 8192, 00:25:15.339 "large_pool_count": 1024, 00:25:15.339 "small_bufsize": 8192, 00:25:15.339 "large_bufsize": 135168 00:25:15.339 } 00:25:15.339 } 00:25:15.339 ] 00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "subsystem": "sock", 00:25:15.339 "config": [ 00:25:15.339 { 00:25:15.339 "method": "sock_set_default_impl", 00:25:15.339 "params": { 00:25:15.339 "impl_name": "posix" 00:25:15.339 } 
00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "method": "sock_impl_set_options", 00:25:15.339 "params": { 00:25:15.339 "impl_name": "ssl", 00:25:15.339 "recv_buf_size": 4096, 00:25:15.339 "send_buf_size": 4096, 00:25:15.339 "enable_recv_pipe": true, 00:25:15.339 "enable_quickack": false, 00:25:15.339 "enable_placement_id": 0, 00:25:15.339 "enable_zerocopy_send_server": true, 00:25:15.339 "enable_zerocopy_send_client": false, 00:25:15.339 "zerocopy_threshold": 0, 00:25:15.339 "tls_version": 0, 00:25:15.339 "enable_ktls": false 00:25:15.339 } 00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "method": "sock_impl_set_options", 00:25:15.339 "params": { 00:25:15.339 "impl_name": "posix", 00:25:15.339 "recv_buf_size": 2097152, 00:25:15.339 "send_buf_size": 2097152, 00:25:15.339 "enable_recv_pipe": true, 00:25:15.339 "enable_quickack": false, 00:25:15.339 "enable_placement_id": 0, 00:25:15.339 "enable_zerocopy_send_server": true, 00:25:15.339 "enable_zerocopy_send_client": false, 00:25:15.339 "zerocopy_threshold": 0, 00:25:15.339 "tls_version": 0, 00:25:15.339 "enable_ktls": false 00:25:15.339 } 00:25:15.339 } 00:25:15.339 ] 00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "subsystem": "vmd", 00:25:15.339 "config": [] 00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "subsystem": "accel", 00:25:15.339 "config": [ 00:25:15.339 { 00:25:15.339 "method": "accel_set_options", 00:25:15.339 "params": { 00:25:15.339 "small_cache_size": 128, 00:25:15.339 "large_cache_size": 16, 00:25:15.339 "task_count": 2048, 00:25:15.339 "sequence_count": 2048, 00:25:15.339 "buf_count": 2048 00:25:15.339 } 00:25:15.339 } 00:25:15.339 ] 00:25:15.339 }, 00:25:15.339 { 00:25:15.339 "subsystem": "bdev", 00:25:15.339 "config": [ 00:25:15.339 { 00:25:15.339 "method": "bdev_set_options", 00:25:15.339 "params": { 00:25:15.339 "bdev_io_pool_size": 65535, 00:25:15.339 "bdev_io_cache_size": 256, 00:25:15.339 "bdev_auto_examine": true, 00:25:15.339 "iobuf_small_cache_size": 128, 00:25:15.339 "iobuf_large_cache_size": 16 00:25:15.339 } 00:25:15.339 }, 00:25:15.340 { 00:25:15.340 "method": "bdev_raid_set_options", 00:25:15.340 "params": { 00:25:15.340 "process_window_size_kb": 1024 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "bdev_iscsi_set_options", 00:25:15.340 "params": { 00:25:15.340 "timeout_sec": 30 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "bdev_nvme_set_options", 00:25:15.340 "params": { 00:25:15.340 "action_on_timeout": "none", 00:25:15.340 "timeout_us": 0, 00:25:15.340 "timeout_admin_us": 0, 00:25:15.340 "keep_alive_timeout_ms": 10000, 00:25:15.340 "arbitration_burst": 0, 00:25:15.340 "low_priority_weight": 0, 00:25:15.340 "medium_priority_weight": 0, 00:25:15.340 "high_priority_weight": 0, 00:25:15.340 "nvme_adminq_poll_period_us": 10000, 00:25:15.340 "nvme_ioq_poll_period_us": 0, 00:25:15.340 "io_queue_requests": 0, 00:25:15.340 "delay_cmd_submit": true, 00:25:15.340 "transport_retry_count": 4, 00:25:15.340 "bdev_retry_count": 3, 00:25:15.340 "transport_ack_timeout": 0, 00:25:15.340 "ctrlr_loss_timeout_sec": 0, 00:25:15.340 "reconnect_delay_sec": 0, 00:25:15.340 "fast_io_fail_timeout_sec": 0, 00:25:15.340 "disable_auto_failback": false, 00:25:15.340 "generate_uuids": false, 00:25:15.340 "transport_tos": 0, 00:25:15.340 "nvme_error_stat": false, 00:25:15.340 "rdma_srq_size": 0, 00:25:15.340 "io_path_stat": false, 00:25:15.340 "allow_accel_sequence": false, 00:25:15.340 "rdma_max_cq_size": 0, 00:25:15.340 "rdma_cm_event_timeout_ms": 0, 00:25:15.340 "dhchap_digests": [ 00:25:15.340 "sha256", 
00:25:15.340 "sha384", 00:25:15.340 "sha512" 00:25:15.340 ], 00:25:15.340 "dhchap_dhgroups": [ 00:25:15.340 "null", 00:25:15.340 "ffdhe2048", 00:25:15.340 "ffdhe3072", 00:25:15.340 "ffdhe4096", 00:25:15.340 "ffdhe6144", 00:25:15.340 "ffdhe8192" 00:25:15.340 ] 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "bdev_nvme_set_hotplug", 00:25:15.340 "params": { 00:25:15.340 "period_us": 100000, 00:25:15.340 "enable": false 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "bdev_malloc_create", 00:25:15.340 "params": { 00:25:15.340 "name": "malloc0", 00:25:15.340 "num_blocks": 8192, 00:25:15.340 "block_size": 4096, 00:25:15.340 "physical_block_size": 4096, 00:25:15.340 "uuid": "84197731-985a-4dcf-8551-b8a84153cffe", 00:25:15.340 "optimal_io_boundary": 0 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "bdev_wait_for_examine" 00:25:15.340 } 00:25:15.340 ] 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "subsystem": "nbd", 00:25:15.340 "config": [] 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "subsystem": "scheduler", 00:25:15.340 "config": [ 00:25:15.340 { 00:25:15.340 "method": "framework_set_scheduler", 00:25:15.340 "params": { 00:25:15.340 "name": "static" 00:25:15.340 } 00:25:15.340 } 00:25:15.340 ] 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "subsystem": "nvmf", 00:25:15.340 "config": [ 00:25:15.340 { 00:25:15.340 "method": "nvmf_set_config", 00:25:15.340 "params": { 00:25:15.340 "discovery_filter": "match_any", 00:25:15.340 "admin_cmd_passthru": { 00:25:15.340 "identify_ctrlr": false 00:25:15.340 } 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_set_max_subsystems", 00:25:15.340 "params": { 00:25:15.340 "max_subsystems": 1024 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_set_crdt", 00:25:15.340 "params": { 00:25:15.340 "crdt1": 0, 00:25:15.340 "crdt2": 0, 00:25:15.340 "crdt3": 0 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_create_transport", 00:25:15.340 "params": { 00:25:15.340 "trtype": "TCP", 00:25:15.340 "max_queue_depth": 128, 00:25:15.340 "max_io_qpairs_per_ctrlr": 127, 00:25:15.340 "in_capsule_data_size": 4096, 00:25:15.340 "max_io_size": 131072, 00:25:15.340 "io_unit_size": 131072, 00:25:15.340 "max_aq_depth": 128, 00:25:15.340 "num_shared_buffers": 511, 00:25:15.340 "buf_cache_size": 4294967295, 00:25:15.340 "dif_insert_or_strip": false, 00:25:15.340 "zcopy": false, 00:25:15.340 "c2h_success": false, 00:25:15.340 "sock_priority": 0, 00:25:15.340 "abort_timeout_sec": 1, 00:25:15.340 "ack_timeout": 0, 00:25:15.340 "data_wr_pool_size": 0 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_create_subsystem", 00:25:15.340 "params": { 00:25:15.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.340 "allow_any_host": false, 00:25:15.340 "serial_number": "00000000000000000000", 00:25:15.340 "model_number": "SPDK bdev Controller", 00:25:15.340 "max_namespaces": 32, 00:25:15.340 "min_cntlid": 1, 00:25:15.340 "max_cntlid": 65519, 00:25:15.340 "ana_reporting": false 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_subsystem_add_host", 00:25:15.340 "params": { 00:25:15.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.340 "host": "nqn.2016-06.io.spdk:host1", 00:25:15.340 "psk": "key0" 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_subsystem_add_ns", 00:25:15.340 "params": { 00:25:15.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.340 "namespace": { 00:25:15.340 "nsid": 1, 
00:25:15.340 "bdev_name": "malloc0", 00:25:15.340 "nguid": "84197731985A4DCF8551B8A84153CFFE", 00:25:15.340 "uuid": "84197731-985a-4dcf-8551-b8a84153cffe", 00:25:15.340 "no_auto_visible": false 00:25:15.340 } 00:25:15.340 } 00:25:15.340 }, 00:25:15.340 { 00:25:15.340 "method": "nvmf_subsystem_add_listener", 00:25:15.340 "params": { 00:25:15.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.340 "listen_address": { 00:25:15.340 "trtype": "TCP", 00:25:15.340 "adrfam": "IPv4", 00:25:15.340 "traddr": "10.0.0.2", 00:25:15.340 "trsvcid": "4420" 00:25:15.340 }, 00:25:15.340 "secure_channel": true 00:25:15.340 } 00:25:15.340 } 00:25:15.340 ] 00:25:15.340 } 00:25:15.340 ] 00:25:15.340 }' 00:25:15.340 07:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:15.597 07:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:15.597 "subsystems": [ 00:25:15.597 { 00:25:15.597 "subsystem": "keyring", 00:25:15.597 "config": [ 00:25:15.597 { 00:25:15.597 "method": "keyring_file_add_key", 00:25:15.597 "params": { 00:25:15.597 "name": "key0", 00:25:15.597 "path": "/tmp/tmp.lx5ZwEtvWP" 00:25:15.597 } 00:25:15.597 } 00:25:15.597 ] 00:25:15.597 }, 00:25:15.597 { 00:25:15.597 "subsystem": "iobuf", 00:25:15.597 "config": [ 00:25:15.597 { 00:25:15.597 "method": "iobuf_set_options", 00:25:15.597 "params": { 00:25:15.597 "small_pool_count": 8192, 00:25:15.597 "large_pool_count": 1024, 00:25:15.597 "small_bufsize": 8192, 00:25:15.597 "large_bufsize": 135168 00:25:15.597 } 00:25:15.597 } 00:25:15.597 ] 00:25:15.597 }, 00:25:15.597 { 00:25:15.597 "subsystem": "sock", 00:25:15.597 "config": [ 00:25:15.597 { 00:25:15.597 "method": "sock_set_default_impl", 00:25:15.597 "params": { 00:25:15.597 "impl_name": "posix" 00:25:15.597 } 00:25:15.597 }, 00:25:15.597 { 00:25:15.597 "method": "sock_impl_set_options", 00:25:15.597 "params": { 00:25:15.597 "impl_name": "ssl", 00:25:15.598 "recv_buf_size": 4096, 00:25:15.598 "send_buf_size": 4096, 00:25:15.598 "enable_recv_pipe": true, 00:25:15.598 "enable_quickack": false, 00:25:15.598 "enable_placement_id": 0, 00:25:15.598 "enable_zerocopy_send_server": true, 00:25:15.598 "enable_zerocopy_send_client": false, 00:25:15.598 "zerocopy_threshold": 0, 00:25:15.598 "tls_version": 0, 00:25:15.598 "enable_ktls": false 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "sock_impl_set_options", 00:25:15.598 "params": { 00:25:15.598 "impl_name": "posix", 00:25:15.598 "recv_buf_size": 2097152, 00:25:15.598 "send_buf_size": 2097152, 00:25:15.598 "enable_recv_pipe": true, 00:25:15.598 "enable_quickack": false, 00:25:15.598 "enable_placement_id": 0, 00:25:15.598 "enable_zerocopy_send_server": true, 00:25:15.598 "enable_zerocopy_send_client": false, 00:25:15.598 "zerocopy_threshold": 0, 00:25:15.598 "tls_version": 0, 00:25:15.598 "enable_ktls": false 00:25:15.598 } 00:25:15.598 } 00:25:15.598 ] 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "subsystem": "vmd", 00:25:15.598 "config": [] 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "subsystem": "accel", 00:25:15.598 "config": [ 00:25:15.598 { 00:25:15.598 "method": "accel_set_options", 00:25:15.598 "params": { 00:25:15.598 "small_cache_size": 128, 00:25:15.598 "large_cache_size": 16, 00:25:15.598 "task_count": 2048, 00:25:15.598 "sequence_count": 2048, 00:25:15.598 "buf_count": 2048 00:25:15.598 } 00:25:15.598 } 00:25:15.598 ] 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "subsystem": "bdev", 00:25:15.598 "config": [ 
00:25:15.598 { 00:25:15.598 "method": "bdev_set_options", 00:25:15.598 "params": { 00:25:15.598 "bdev_io_pool_size": 65535, 00:25:15.598 "bdev_io_cache_size": 256, 00:25:15.598 "bdev_auto_examine": true, 00:25:15.598 "iobuf_small_cache_size": 128, 00:25:15.598 "iobuf_large_cache_size": 16 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_raid_set_options", 00:25:15.598 "params": { 00:25:15.598 "process_window_size_kb": 1024 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_iscsi_set_options", 00:25:15.598 "params": { 00:25:15.598 "timeout_sec": 30 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_nvme_set_options", 00:25:15.598 "params": { 00:25:15.598 "action_on_timeout": "none", 00:25:15.598 "timeout_us": 0, 00:25:15.598 "timeout_admin_us": 0, 00:25:15.598 "keep_alive_timeout_ms": 10000, 00:25:15.598 "arbitration_burst": 0, 00:25:15.598 "low_priority_weight": 0, 00:25:15.598 "medium_priority_weight": 0, 00:25:15.598 "high_priority_weight": 0, 00:25:15.598 "nvme_adminq_poll_period_us": 10000, 00:25:15.598 "nvme_ioq_poll_period_us": 0, 00:25:15.598 "io_queue_requests": 512, 00:25:15.598 "delay_cmd_submit": true, 00:25:15.598 "transport_retry_count": 4, 00:25:15.598 "bdev_retry_count": 3, 00:25:15.598 "transport_ack_timeout": 0, 00:25:15.598 "ctrlr_loss_timeout_sec": 0, 00:25:15.598 "reconnect_delay_sec": 0, 00:25:15.598 "fast_io_fail_timeout_sec": 0, 00:25:15.598 "disable_auto_failback": false, 00:25:15.598 "generate_uuids": false, 00:25:15.598 "transport_tos": 0, 00:25:15.598 "nvme_error_stat": false, 00:25:15.598 "rdma_srq_size": 0, 00:25:15.598 "io_path_stat": false, 00:25:15.598 "allow_accel_sequence": false, 00:25:15.598 "rdma_max_cq_size": 0, 00:25:15.598 "rdma_cm_event_timeout_ms": 0, 00:25:15.598 "dhchap_digests": [ 00:25:15.598 "sha256", 00:25:15.598 "sha384", 00:25:15.598 "sha512" 00:25:15.598 ], 00:25:15.598 "dhchap_dhgroups": [ 00:25:15.598 "null", 00:25:15.598 "ffdhe2048", 00:25:15.598 "ffdhe3072", 00:25:15.598 "ffdhe4096", 00:25:15.598 "ffdhe6144", 00:25:15.598 "ffdhe8192" 00:25:15.598 ] 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_nvme_attach_controller", 00:25:15.598 "params": { 00:25:15.598 "name": "nvme0", 00:25:15.598 "trtype": "TCP", 00:25:15.598 "adrfam": "IPv4", 00:25:15.598 "traddr": "10.0.0.2", 00:25:15.598 "trsvcid": "4420", 00:25:15.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.598 "prchk_reftag": false, 00:25:15.598 "prchk_guard": false, 00:25:15.598 "ctrlr_loss_timeout_sec": 0, 00:25:15.598 "reconnect_delay_sec": 0, 00:25:15.598 "fast_io_fail_timeout_sec": 0, 00:25:15.598 "psk": "key0", 00:25:15.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.598 "hdgst": false, 00:25:15.598 "ddgst": false 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_nvme_set_hotplug", 00:25:15.598 "params": { 00:25:15.598 "period_us": 100000, 00:25:15.598 "enable": false 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_enable_histogram", 00:25:15.598 "params": { 00:25:15.598 "name": "nvme0n1", 00:25:15.598 "enable": true 00:25:15.598 } 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "method": "bdev_wait_for_examine" 00:25:15.598 } 00:25:15.598 ] 00:25:15.598 }, 00:25:15.598 { 00:25:15.598 "subsystem": "nbd", 00:25:15.598 "config": [] 00:25:15.598 } 00:25:15.598 ] 00:25:15.598 }' 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1128113 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1128113 ']' 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128113 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128113 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128113' 00:25:15.598 killing process with pid 1128113 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128113 00:25:15.598 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.598 00:25:15.598 Latency(us) 00:25:15.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.598 =================================================================================================================== 00:25:15.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.598 07:52:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128113 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1127938 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127938 ']' 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127938 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127938 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127938' 00:25:16.967 killing process with pid 1127938 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127938 00:25:16.967 07:52:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127938 00:25:18.340 07:52:09 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:18.340 07:52:09 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:18.340 "subsystems": [ 00:25:18.340 { 00:25:18.340 "subsystem": "keyring", 00:25:18.340 "config": [ 00:25:18.340 { 00:25:18.340 "method": "keyring_file_add_key", 00:25:18.340 "params": { 00:25:18.340 "name": "key0", 00:25:18.340 "path": "/tmp/tmp.lx5ZwEtvWP" 00:25:18.340 } 00:25:18.340 } 00:25:18.340 ] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "iobuf", 00:25:18.340 "config": [ 00:25:18.340 { 00:25:18.340 "method": "iobuf_set_options", 00:25:18.340 "params": { 00:25:18.340 "small_pool_count": 8192, 00:25:18.340 "large_pool_count": 1024, 00:25:18.340 "small_bufsize": 8192, 00:25:18.340 "large_bufsize": 135168 00:25:18.340 } 00:25:18.340 } 00:25:18.340 ] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "sock", 00:25:18.340 "config": [ 00:25:18.340 { 00:25:18.340 "method": "sock_set_default_impl", 00:25:18.340 "params": { 00:25:18.340 "impl_name": 
"posix" 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "sock_impl_set_options", 00:25:18.340 "params": { 00:25:18.340 "impl_name": "ssl", 00:25:18.340 "recv_buf_size": 4096, 00:25:18.340 "send_buf_size": 4096, 00:25:18.340 "enable_recv_pipe": true, 00:25:18.340 "enable_quickack": false, 00:25:18.340 "enable_placement_id": 0, 00:25:18.340 "enable_zerocopy_send_server": true, 00:25:18.340 "enable_zerocopy_send_client": false, 00:25:18.340 "zerocopy_threshold": 0, 00:25:18.340 "tls_version": 0, 00:25:18.340 "enable_ktls": false 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "sock_impl_set_options", 00:25:18.340 "params": { 00:25:18.340 "impl_name": "posix", 00:25:18.340 "recv_buf_size": 2097152, 00:25:18.340 "send_buf_size": 2097152, 00:25:18.340 "enable_recv_pipe": true, 00:25:18.340 "enable_quickack": false, 00:25:18.340 "enable_placement_id": 0, 00:25:18.340 "enable_zerocopy_send_server": true, 00:25:18.340 "enable_zerocopy_send_client": false, 00:25:18.340 "zerocopy_threshold": 0, 00:25:18.340 "tls_version": 0, 00:25:18.340 "enable_ktls": false 00:25:18.340 } 00:25:18.340 } 00:25:18.340 ] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "vmd", 00:25:18.340 "config": [] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "accel", 00:25:18.340 "config": [ 00:25:18.340 { 00:25:18.340 "method": "accel_set_options", 00:25:18.340 "params": { 00:25:18.340 "small_cache_size": 128, 00:25:18.340 "large_cache_size": 16, 00:25:18.340 "task_count": 2048, 00:25:18.340 "sequence_count": 2048, 00:25:18.340 "buf_count": 2048 00:25:18.340 } 00:25:18.340 } 00:25:18.340 ] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "bdev", 00:25:18.340 "config": [ 00:25:18.340 { 00:25:18.340 "method": "bdev_set_options", 00:25:18.340 "params": { 00:25:18.340 "bdev_io_pool_size": 65535, 00:25:18.340 "bdev_io_cache_size": 256, 00:25:18.340 "bdev_auto_examine": true, 00:25:18.340 "iobuf_small_cache_size": 128, 00:25:18.340 "iobuf_large_cache_size": 16 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "bdev_raid_set_options", 00:25:18.340 "params": { 00:25:18.340 "process_window_size_kb": 1024 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "bdev_iscsi_set_options", 00:25:18.340 "params": { 00:25:18.340 "timeout_sec": 30 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "bdev_nvme_set_options", 00:25:18.340 "params": { 00:25:18.340 "action_on_timeout": "none", 00:25:18.340 "timeout_us": 0, 00:25:18.340 "timeout_admin_us": 0, 00:25:18.340 "keep_alive_timeout_ms": 10000, 00:25:18.340 "arbitration_burst": 0, 00:25:18.340 "low_priority_weight": 0, 00:25:18.340 "medium_priority_weight": 0, 00:25:18.340 "high_priority_weight": 0, 00:25:18.340 "nvme_adminq_poll_period_us": 10000, 00:25:18.340 "nvme_ioq_poll_period_us": 0, 00:25:18.340 "io_queue_requests": 0, 00:25:18.340 "delay_cmd_submit": true, 00:25:18.340 "transport_retry_count": 4, 00:25:18.340 "bdev_retry_count": 3, 00:25:18.340 "transport_ack_timeout": 0, 00:25:18.340 "ctrlr_loss_timeout_sec": 0, 00:25:18.340 "reconnect_delay_sec": 0, 00:25:18.340 "fast_io_fail_timeout_sec": 0, 00:25:18.340 "disable_auto_failback": false, 00:25:18.340 "generate_uuids": false, 00:25:18.340 "transport_tos": 0, 00:25:18.340 "nvme_error_stat": false, 00:25:18.340 "rdma_srq_size": 0, 00:25:18.340 "io_path_stat": false, 00:25:18.340 "allow_accel_sequence": false, 00:25:18.340 "rdma_max_cq_size": 0, 00:25:18.340 "rdma_cm_event_timeout_ms": 0, 00:25:18.340 "dhchap_digests": [ 
00:25:18.340 "sha256", 00:25:18.340 "sha384", 00:25:18.340 "sha512" 00:25:18.340 ], 00:25:18.340 "dhchap_dhgroups": [ 00:25:18.340 "null", 00:25:18.340 "ffdhe2048", 00:25:18.340 "ffdhe3072", 00:25:18.340 "ffdhe4096", 00:25:18.340 "ffdhe6144", 00:25:18.340 "ffdhe8192" 00:25:18.340 ] 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "bdev_nvme_set_hotplug", 00:25:18.340 "params": { 00:25:18.340 "period_us": 100000, 00:25:18.340 "enable": false 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "bdev_malloc_create", 00:25:18.340 "params": { 00:25:18.340 "name": "malloc0", 00:25:18.340 "num_blocks": 8192, 00:25:18.340 "block_size": 4096, 00:25:18.340 "physical_block_size": 4096, 00:25:18.340 "uuid": "84197731-985a-4dcf-8551-b8a84153cffe", 00:25:18.340 "optimal_io_boundary": 0 00:25:18.340 } 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "method": "bdev_wait_for_examine" 00:25:18.340 } 00:25:18.340 ] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "nbd", 00:25:18.340 "config": [] 00:25:18.340 }, 00:25:18.340 { 00:25:18.340 "subsystem": "scheduler", 00:25:18.340 "config": [ 00:25:18.340 { 00:25:18.340 "method": "framework_set_scheduler", 00:25:18.340 "params": { 00:25:18.340 "name": "static" 00:25:18.340 } 00:25:18.340 } 00:25:18.340 ] 00:25:18.340 }, 00:25:18.340 { 00:25:18.341 "subsystem": "nvmf", 00:25:18.341 "config": [ 00:25:18.341 { 00:25:18.341 "method": "nvmf_set_config", 00:25:18.341 "params": { 00:25:18.341 "discovery_filter": "match_any", 00:25:18.341 "admin_cmd_passthru": { 00:25:18.341 "identify_ctrlr": false 00:25:18.341 } 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_set_max_subsystems", 00:25:18.341 "params": { 00:25:18.341 "max_subsystems": 1024 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_set_crdt", 00:25:18.341 "params": { 00:25:18.341 "crdt1": 0, 00:25:18.341 "crdt2": 0, 00:25:18.341 "crdt3": 0 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_create_transport", 00:25:18.341 "params": { 00:25:18.341 "trtype": "TCP", 00:25:18.341 "max_queue_depth": 128, 00:25:18.341 "max_io_qpairs_per_ctrlr": 127, 00:25:18.341 "in_capsule_data_size": 4096, 00:25:18.341 "max_io_size": 131072, 00:25:18.341 "io_unit_size": 131072, 00:25:18.341 "max_aq_depth": 128, 00:25:18.341 "num_shared_buffers": 511, 00:25:18.341 "buf_cache_size": 4294967295, 00:25:18.341 "dif_insert_or_strip": false, 00:25:18.341 "zcopy": false, 00:25:18.341 "c2h_success": false, 00:25:18.341 "sock_priority": 0, 00:25:18.341 "abort_timeout_sec": 1, 00:25:18.341 "ack_timeout": 0, 00:25:18.341 "data_wr_pool_size": 0 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_create_subsystem", 00:25:18.341 "params": { 00:25:18.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.341 "allow_any_host": false, 00:25:18.341 "serial_number": "00000000000000000000", 00:25:18.341 "model_number": "SPDK bdev Controller", 00:25:18.341 "max_namespaces": 32, 00:25:18.341 "min_cntlid": 1, 00:25:18.341 "max_cntlid": 65519, 00:25:18.341 "ana_reporting": false 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_subsystem_add_host", 00:25:18.341 "params": { 00:25:18.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.341 "host": "nqn.2016-06.io.spdk:host1", 00:25:18.341 "psk": "key0" 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_subsystem_add_ns", 00:25:18.341 "params": { 00:25:18.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.341 "namespace": { 
00:25:18.341 "nsid": 1, 00:25:18.341 "bdev_name": "malloc0", 00:25:18.341 "nguid": "84197731985A4DCF8551B8A84153CFFE", 00:25:18.341 "uuid": "84197731-985a-4dcf-8551-b8a84153cffe", 00:25:18.341 "no_auto_visible": false 00:25:18.341 } 00:25:18.341 } 00:25:18.341 }, 00:25:18.341 { 00:25:18.341 "method": "nvmf_subsystem_add_listener", 00:25:18.341 "params": { 00:25:18.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.341 "listen_address": { 00:25:18.341 "trtype": "TCP", 00:25:18.341 "adrfam": "IPv4", 00:25:18.341 "traddr": "10.0.0.2", 00:25:18.341 "trsvcid": "4420" 00:25:18.341 }, 00:25:18.341 "secure_channel": true 00:25:18.341 } 00:25:18.341 } 00:25:18.341 ] 00:25:18.341 } 00:25:18.341 ] 00:25:18.341 }' 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1128793 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1128793 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128793 ']' 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.341 07:52:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.341 [2024-07-15 07:52:09.361442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:18.341 [2024-07-15 07:52:09.361592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.341 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.341 [2024-07-15 07:52:09.492525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.599 [2024-07-15 07:52:09.740322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.599 [2024-07-15 07:52:09.740411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.599 [2024-07-15 07:52:09.740442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.599 [2024-07-15 07:52:09.740468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.599 [2024-07-15 07:52:09.740490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:18.599 [2024-07-15 07:52:09.740648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.165 [2024-07-15 07:52:10.299741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.165 [2024-07-15 07:52:10.331734] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:19.165 [2024-07-15 07:52:10.332083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1128941 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1128941 /var/tmp/bdevperf.sock 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128941 ']' 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.165 07:52:10 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:19.165 "subsystems": [ 00:25:19.165 { 00:25:19.165 "subsystem": "keyring", 00:25:19.165 "config": [ 00:25:19.165 { 00:25:19.165 "method": "keyring_file_add_key", 00:25:19.165 "params": { 00:25:19.165 "name": "key0", 00:25:19.165 "path": "/tmp/tmp.lx5ZwEtvWP" 00:25:19.165 } 00:25:19.165 } 00:25:19.165 ] 00:25:19.165 }, 00:25:19.165 { 00:25:19.165 "subsystem": "iobuf", 00:25:19.165 "config": [ 00:25:19.165 { 00:25:19.165 "method": "iobuf_set_options", 00:25:19.165 "params": { 00:25:19.165 "small_pool_count": 8192, 00:25:19.165 "large_pool_count": 1024, 00:25:19.165 "small_bufsize": 8192, 00:25:19.165 "large_bufsize": 135168 00:25:19.165 } 00:25:19.165 } 00:25:19.165 ] 00:25:19.165 }, 00:25:19.165 { 00:25:19.165 "subsystem": "sock", 00:25:19.165 "config": [ 00:25:19.165 { 00:25:19.165 "method": "sock_set_default_impl", 00:25:19.165 "params": { 00:25:19.165 "impl_name": "posix" 00:25:19.165 } 00:25:19.165 }, 00:25:19.165 { 00:25:19.165 "method": "sock_impl_set_options", 00:25:19.165 "params": { 00:25:19.165 "impl_name": "ssl", 00:25:19.165 "recv_buf_size": 4096, 00:25:19.165 "send_buf_size": 4096, 00:25:19.165 "enable_recv_pipe": true, 00:25:19.165 "enable_quickack": false, 00:25:19.165 "enable_placement_id": 0, 00:25:19.165 "enable_zerocopy_send_server": true, 00:25:19.165 "enable_zerocopy_send_client": false, 00:25:19.165 "zerocopy_threshold": 0, 00:25:19.165 "tls_version": 0, 00:25:19.165 "enable_ktls": false 00:25:19.165 } 00:25:19.165 }, 00:25:19.165 { 00:25:19.165 "method": "sock_impl_set_options", 00:25:19.165 "params": { 00:25:19.165 "impl_name": "posix", 00:25:19.165 "recv_buf_size": 2097152, 00:25:19.165 "send_buf_size": 2097152, 00:25:19.165 
"enable_recv_pipe": true, 00:25:19.165 "enable_quickack": false, 00:25:19.165 "enable_placement_id": 0, 00:25:19.165 "enable_zerocopy_send_server": true, 00:25:19.165 "enable_zerocopy_send_client": false, 00:25:19.165 "zerocopy_threshold": 0, 00:25:19.165 "tls_version": 0, 00:25:19.165 "enable_ktls": false 00:25:19.165 } 00:25:19.165 } 00:25:19.165 ] 00:25:19.165 }, 00:25:19.165 { 00:25:19.165 "subsystem": "vmd", 00:25:19.165 "config": [] 00:25:19.165 }, 00:25:19.165 { 00:25:19.165 "subsystem": "accel", 00:25:19.165 "config": [ 00:25:19.165 { 00:25:19.165 "method": "accel_set_options", 00:25:19.165 "params": { 00:25:19.165 "small_cache_size": 128, 00:25:19.165 "large_cache_size": 16, 00:25:19.165 "task_count": 2048, 00:25:19.165 "sequence_count": 2048, 00:25:19.165 "buf_count": 2048 00:25:19.165 } 00:25:19.165 } 00:25:19.165 ] 00:25:19.165 }, 00:25:19.166 { 00:25:19.166 "subsystem": "bdev", 00:25:19.166 "config": [ 00:25:19.166 { 00:25:19.166 "method": "bdev_set_options", 00:25:19.166 "params": { 00:25:19.166 "bdev_io_pool_size": 65535, 00:25:19.166 "bdev_io_cache_size": 256, 00:25:19.166 "bdev_auto_examine": true, 00:25:19.166 "iobuf_small_cache_size": 128, 00:25:19.166 "iobuf_large_cache_size": 16 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_raid_set_options", 00:25:19.166 "params": { 00:25:19.166 "process_window_size_kb": 1024 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_iscsi_set_options", 00:25:19.166 "params": { 00:25:19.166 "timeout_sec": 30 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_nvme_set_options", 00:25:19.166 "params": { 00:25:19.166 "action_on_timeout": "none", 00:25:19.166 "timeout_us": 0, 00:25:19.166 "timeout_admin_us": 0, 00:25:19.166 "keep_alive_timeout_ms": 10000, 00:25:19.166 "arbitration_burst": 0, 00:25:19.166 "low_priority_weight": 0, 00:25:19.166 "medium_priority_weight": 0, 00:25:19.166 "high_priority_weight": 0, 00:25:19.166 "nvme_adminq_poll_period_us": 10000, 00:25:19.166 "nvme_ioq_poll_period_us": 0, 00:25:19.166 "io_queue_requests": 512, 00:25:19.166 "delay_cmd_submit": true, 00:25:19.166 "transport_retry_count": 4, 00:25:19.166 "bdev_retry_count": 3, 00:25:19.166 "transport_ack_timeout": 0, 00:25:19.166 "ctrlr_loss_timeout_sec": 0, 00:25:19.166 "reconnect_delay_sec": 0, 00:25:19.166 "fast_io_fail_timeout_sec": 0, 00:25:19.166 "disable_auto_failback": false, 00:25:19.166 "generate_uuids": false, 00:25:19.166 "transport_tos": 0, 00:25:19.166 "nvme_error_stat": false, 00:25:19.166 "rdma_srq_size": 0, 00:25:19.166 "io_path_stat": false, 00:25:19.166 "allow_accel_sequence": false, 00:25:19.166 "rdma_max_cq_size": 0, 00:25:19.166 "rdma_cm_event_timeout_ms": 0, 00:25:19.166 "dhchap_digests": [ 00:25:19.166 "sha256", 00:25:19.166 "sha384", 00:25:19.166 "sh 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:19.166 a512" 00:25:19.166 ], 00:25:19.166 "dhchap_dhgroups": [ 00:25:19.166 "null", 00:25:19.166 "ffdhe2048", 00:25:19.166 "ffdhe3072", 00:25:19.166 "ffdhe4096", 00:25:19.166 "ffdhe6144", 00:25:19.166 "ffdhe8192" 00:25:19.166 ] 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_nvme_attach_controller", 00:25:19.166 "params": { 00:25:19.166 "name": "nvme0", 00:25:19.166 "trtype": "TCP", 00:25:19.166 "adrfam": "IPv4", 00:25:19.166 "traddr": "10.0.0.2", 00:25:19.166 "trsvcid": "4420", 00:25:19.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.166 "prchk_reftag": false, 00:25:19.166 "prchk_guard": false, 00:25:19.166 "ctrlr_loss_timeout_sec": 0, 00:25:19.166 "reconnect_delay_sec": 0, 00:25:19.166 "fast_io_fail_timeout_sec": 0, 00:25:19.166 "psk": "key0", 00:25:19.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.166 "hdgst": false, 00:25:19.166 "ddgst": false 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_nvme_set_hotplug", 00:25:19.166 "params": { 00:25:19.166 "period_us": 100000, 00:25:19.166 "enable": false 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_enable_histogram", 00:25:19.166 "params": { 00:25:19.166 "name": "nvme0n1", 00:25:19.166 "enable": true 00:25:19.166 } 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "method": "bdev_wait_for_examine" 00:25:19.166 } 00:25:19.166 ] 00:25:19.166 }, 00:25:19.166 { 00:25:19.166 "subsystem": "nbd", 00:25:19.166 "config": [] 00:25:19.166 } 00:25:19.166 ] 00:25:19.166 }' 00:25:19.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.166 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.166 07:52:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.425 [2024-07-15 07:52:10.466212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:19.425 [2024-07-15 07:52:10.466356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128941 ] 00:25:19.425 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.425 [2024-07-15 07:52:10.595706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.683 [2024-07-15 07:52:10.854351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.250 [2024-07-15 07:52:11.293510] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.250 07:52:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.250 07:52:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:20.250 07:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.250 07:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:20.506 07:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.506 07:52:11 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:20.764 Running I/O for 1 seconds... 
00:25:21.697 00:25:21.697 Latency(us) 00:25:21.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:21.697 Verification LBA range: start 0x0 length 0x2000 00:25:21.697 nvme0n1 : 1.05 1958.28 7.65 0.00 0.00 63999.50 9757.58 64079.64 00:25:21.697 =================================================================================================================== 00:25:21.697 Total : 1958.28 7.65 0.00 0.00 63999.50 9757.58 64079.64 00:25:21.697 0 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:21.697 nvmf_trace.0 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1128941 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128941 ']' 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128941 00:25:21.697 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128941 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128941' 00:25:21.955 killing process with pid 1128941 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128941 00:25:21.955 Received shutdown signal, test time was about 1.000000 seconds 00:25:21.955 00:25:21.955 Latency(us) 00:25:21.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.955 =================================================================================================================== 00:25:21.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.955 07:52:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128941 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
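[Note] Before tearing the processes down, the EXIT trap above archives the target's trace buffer out of /dev/shm for offline inspection. A sketch of that collection step, assuming the app was started with -i 0 so the buffer is named nvmf_trace.0; the output path is illustrative:

    # The trace buffer is an ordinary file in /dev/shm named <app>_trace.<id>;
    # it disappears with the host, so archive it before cleanup.
    shm_file=nvmf_trace.0
    if [ -f "/dev/shm/$shm_file" ]; then
        tar -C /dev/shm -czf "./${shm_file}_shm.tar.gz" "$shm_file"
    fi
    # As the startup NOTICE lines suggest, the copied buffer can be decoded
    # later without the live process, e.g.: build/bin/spdk_trace -f /dev/shm/nvmf_trace.0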
00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.887 rmmod nvme_tcp 00:25:22.887 rmmod nvme_fabrics 00:25:22.887 rmmod nvme_keyring 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1128793 ']' 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1128793 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128793 ']' 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128793 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128793 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128793' 00:25:22.887 killing process with pid 1128793 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128793 00:25:22.887 07:52:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128793 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.261 07:52:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.821 07:52:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:26.821 07:52:17 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aOWJHufoPx /tmp/tmp.2I5lmHAFT4 /tmp/tmp.lx5ZwEtvWP 00:25:26.821 00:25:26.821 real 1m50.403s 00:25:26.821 user 2m57.083s 00:25:26.821 sys 0m27.082s 00:25:26.821 07:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:26.821 07:52:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.821 ************************************ 00:25:26.821 END TEST nvmf_tls 00:25:26.821 ************************************ 00:25:26.821 07:52:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:26.821 07:52:17 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:26.821 07:52:17 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:26.821 07:52:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.821 07:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.821 ************************************ 00:25:26.821 START TEST nvmf_fips 00:25:26.821 ************************************ 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:26.821 * Looking for test storage... 00:25:26.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.821 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:26.822 
07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:26.822 Error setting digest 00:25:26.822 00324808E57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:26.822 00324808E57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:26.822 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.823 07:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.724 
07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.724 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.724 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:25:28.725 00:25:28.725 --- 10.0.0.2 ping statistics --- 00:25:28.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.725 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:25:28.725 00:25:28.725 --- 10.0.0.1 ping statistics --- 00:25:28.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.725 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1131442 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1131442 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1131442 ']' 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.725 07:52:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.725 [2024-07-15 07:52:19.816469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:28.725 [2024-07-15 07:52:19.816605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.725 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.983 [2024-07-15 07:52:19.958104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.241 [2024-07-15 07:52:20.217751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.241 [2024-07-15 07:52:20.217831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
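The nvmf_tgt instance now coming up was pinned into a private network namespace a few entries earlier, so target traffic (10.0.0.2 on cvl_0_0, inside the namespace) reaches the initiator (10.0.0.1 on cvl_0_1) over the physical E810 link rather than the loopback device. Condensed from the trace above, that plumbing is equivalent to the following (a sketch using the cvl_0_0/cvl_0_1 names this rig reports; run as root):

  ip netns add cvl_0_0_ns_spdk                 # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check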
00:25:29.241 [2024-07-15 07:52:20.217861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.241 [2024-07-15 07:52:20.217899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.241 [2024-07-15 07:52:20.217923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.241 [2024-07-15 07:52:20.217973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.499 07:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:29.757 [2024-07-15 07:52:20.941529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.757 [2024-07-15 07:52:20.957485] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:29.757 [2024-07-15 07:52:20.957818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.015 [2024-07-15 07:52:21.032522] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:30.015 malloc0 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1131598 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1131598 /var/tmp/bdevperf.sock 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1131598 ']' 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.015 07:52:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.015 [2024-07-15 07:52:21.170806] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:30.015 [2024-07-15 07:52:21.170952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131598 ] 00:25:30.015 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.273 [2024-07-15 07:52:21.292690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.531 [2024-07-15 07:52:21.526264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.097 07:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.097 07:52:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:31.097 07:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:31.355 [2024-07-15 07:52:22.363385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.355 [2024-07-15 07:52:22.363574] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:31.355 TLSTESTn1 00:25:31.355 07:52:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:31.614 Running I/O for 10 seconds... 
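Both ends of the TLS connection are now armed with the same pre-shared key: the target side got it via the key file fips.sh wrote and locked down above, and bdevperf passes the same file with --psk. Pulled out of the xtrace noise, the initiator-side sequence is roughly the following ($SPDK here abbreviates the long Jenkins workspace path; the key is the test's fixed sample value, not a secret):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$SPDK/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                    # the PSK file must not be world-readable

  # bdevperf in wait mode on its own RPC socket: QD 128, 4 KiB verify workload, 10 s
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # attach a TLS-protected NVMe/TCP controller using the PSK, then start the run
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests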
00:25:41.575
00:25:41.575 Latency(us)
00:25:41.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.575 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:41.575 Verification LBA range: start 0x0 length 0x2000
00:25:41.575 TLSTESTn1 : 10.03 2119.32 8.28 0.00 0.00 60274.38 8446.86 74177.04
00:25:41.575 ===================================================================================================================
00:25:41.575 Total : 2119.32 8.28 0.00 0.00 60274.38 8446.86 74177.04
00:25:41.575 0
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:25:41.575 nvmf_trace.0
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1131598
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1131598 ']'
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1131598
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131598
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131598'
00:25:41.575 killing process with pid 1131598
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1131598
00:25:41.575 Received shutdown signal, test time was about 10.000000 seconds
00:25:41.575
00:25:41.575 Latency(us)
00:25:41.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.575 ===================================================================================================================
00:25:41.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:41.575 07:52:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1131598
00:25:41.575 [2024-07-15 07:52:32.753926] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.516 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.516 rmmod nvme_tcp 00:25:42.773 rmmod nvme_fabrics 00:25:42.773 rmmod nvme_keyring 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1131442 ']' 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1131442 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1131442 ']' 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1131442 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131442 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131442' 00:25:42.773 killing process with pid 1131442 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1131442 00:25:42.773 [2024-07-15 07:52:33.826034] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:42.773 07:52:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1131442 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.146 07:52:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.044 07:52:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.044 07:52:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:46.044 00:25:46.044 real 0m19.743s 00:25:46.044 user 0m23.578s 00:25:46.044 sys 0m6.686s 00:25:46.044 07:52:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:46.044 07:52:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:46.044 ************************************ 00:25:46.044 END TEST nvmf_fips 
00:25:46.044 ************************************ 00:25:46.044 07:52:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:46.044 07:52:37 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:46.044 07:52:37 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:46.044 07:52:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:46.044 07:52:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.044 07:52:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.302 ************************************ 00:25:46.302 START TEST nvmf_fuzz 00:25:46.302 ************************************ 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:46.302 * Looking for test storage... 00:25:46.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.302 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.303 07:52:37 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.303 07:52:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.204 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:48.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:48.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:48.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:48.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.205 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:25:48.465 00:25:48.465 --- 10.0.0.2 ping statistics --- 00:25:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.465 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:25:48.465 00:25:48.465 --- 10.0.0.1 ping statistics --- 00:25:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.465 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1135111 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1135111 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1135111 ']' 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
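Same pattern as the FIPS run: the fuzz target is launched inside the namespace so it can listen on 10.0.0.2, and the harness blocks until the RPC socket answers. Stripped of the trace prefixes, the bring-up amounts to this sketch ($SPDK again standing in for the workspace path; waitforlisten is the autotest_common.sh helper seen in the trace):

  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &   # single-core target, full trace mask
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target is ready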
00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.465 07:52:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.428 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.428 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:49.428 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.428 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.428 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.429 Malloc0 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.429 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:49.687 07:52:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:21.767 Fuzzing completed. 
Shutting down the fuzz application
00:26:21.767
00:26:21.767 Dumping successful admin opcodes:
00:26:21.767 8, 9, 10, 24,
00:26:21.767 Dumping successful io opcodes:
00:26:21.767 0, 9,
00:26:21.767 NS: 0x200003aefec0 I/O qp, Total commands completed: 313437, total successful commands: 1848, random_seed: 3129780096
00:26:21.767 NS: 0x200003aefec0 admin qp, Total commands completed: 39488, total successful commands: 321, random_seed: 203094784
00:26:21.767 07:53:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:26:22.697 Fuzzing completed. Shutting down the fuzz application
00:26:22.697
00:26:22.697 Dumping successful admin opcodes:
00:26:22.697 24,
00:26:22.697 Dumping successful io opcodes:
00:26:22.697
00:26:22.697 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 601054351
00:26:22.697 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 601240624
00:26:22.954 07:53:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:22.955 rmmod nvme_tcp
00:26:22.955 rmmod nvme_fabrics
00:26:22.955 rmmod nvme_keyring
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1135111 ']'
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1135111
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1135111 ']'
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1135111
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:22.955 07:53:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1135111
00:26:22.955 07:53:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:22.955 07:53:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:22.955
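Taken together, the two nvme_fuzz passes above exercise both modes the tool offers: a 30-second seeded random run against the admin and I/O queues (hence the large completion counters), then a replay of the canned commands in example.json, which issues a handful of commands and exits, hence the near-zero counters. Reduced to their invocations (trid as assembled at fabrics_fuzz.sh@27; -N and -a are kept exactly as the script passes them):

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

  # pass 1: random fuzzing for 30 s; the fixed seed makes any crash reproducible
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

  # pass 2: replay the example command set instead of generating random ones
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -F "$trid" \
      -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a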
07:53:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1135111' 00:26:22.955 killing process with pid 1135111 00:26:22.955 07:53:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1135111 00:26:22.955 07:53:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1135111 00:26:24.865 07:53:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.866 07:53:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.767 07:53:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:26.767 07:53:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:26.767 00:26:26.767 real 0m40.346s 00:26:26.767 user 0m57.832s 00:26:26.767 sys 0m13.414s 00:26:26.767 07:53:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:26.767 07:53:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:26.767 ************************************ 00:26:26.767 END TEST nvmf_fuzz 00:26:26.767 ************************************ 00:26:26.767 07:53:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:26.767 07:53:17 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:26.767 07:53:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:26.767 07:53:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.767 07:53:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:26.767 ************************************ 00:26:26.767 START TEST nvmf_multiconnection 00:26:26.767 ************************************ 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:26.767 * Looking for test storage... 
00:26:26.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.767 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.768 07:53:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.665 07:53:19 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:28.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:28.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:28.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:28.665 07:53:19 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:28.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.665 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:26:28.666 00:26:28.666 --- 10.0.0.2 ping statistics --- 00:26:28.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.666 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:26:28.666 00:26:28.666 --- 10.0.0.1 ping statistics --- 00:26:28.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.666 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1141175 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1141175 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1141175 ']' 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
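
The nvmf_tcp_init sequence above builds the loopback topology this test runs on: one port of the E810 pair (cvl_0_0) is moved into a fresh network namespace and becomes the target side (10.0.0.2), while its peer port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule admitting NVMe/TCP traffic on port 4420 and a ping in each direction to verify the link. Condensed into a sketch, with names and addresses taken directly from the log and error handling omitted:

    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
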
00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.666 07:53:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.666 [2024-07-15 07:53:19.879329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:28.666 [2024-07-15 07:53:19.879465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.923 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.923 [2024-07-15 07:53:20.022971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.181 [2024-07-15 07:53:20.287648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.181 [2024-07-15 07:53:20.287727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.181 [2024-07-15 07:53:20.287765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.181 [2024-07-15 07:53:20.287786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.181 [2024-07-15 07:53:20.287810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.181 [2024-07-15 07:53:20.287930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.181 [2024-07-15 07:53:20.287988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.181 [2024-07-15 07:53:20.288034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.181 [2024-07-15 07:53:20.288045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.746 [2024-07-15 07:53:20.812978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.746 
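
nvmfappstart above launches the target inside the target namespace, waits for its RPC socket, then creates the TCP transport. Reduced to the commands visible in the log (rpc_cmd is the test suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; paths shortened):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &  # core mask 0xF (4 reactors),
                                                      # tracepoint group mask 0xFFFF
    # waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs, then:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB IO unit
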
07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.746 Malloc1 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.746 [2024-07-15 07:53:20.922336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.746 07:53:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 Malloc2 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 Malloc3 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 Malloc4 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.005 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.263 Malloc5 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.263 Malloc6 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.263 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.264 07:53:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.264 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.522 Malloc7 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.522 Malloc8 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.522 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.523 Malloc9 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.523 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 Malloc10 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 Malloc11 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
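
The block above repeats the same four RPCs for Malloc1/cnode1 through Malloc11/cnode11: create a malloc bdev, create a subsystem with a matching SPDKn serial, attach the bdev as a namespace, and add a TCP listener on the target-side address. The loop the script runs, condensed with all values as logged:

    for i in $(seq 1 11); do
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i    # 64 MiB bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
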
00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.782 07:53:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:31.717 07:53:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:31.717 07:53:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:31.717 07:53:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:31.717 07:53:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:31.717 07:53:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.634 07:53:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:34.200 07:53:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:34.200 07:53:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:34.200 07:53:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:34.200 07:53:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:34.200 07:53:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.098 
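
From here the initiator side attaches to each of the 11 subsystems in turn; waitforserial then polls lsblk until a block device carrying the expected serial (SPDKn) shows up, giving up after 15 tries. A simplified per-iteration equivalent of the helper, with the host NQN/ID values as they appear in the log:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode$i \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    for try in $(seq 1 15); do          # waitforserial: poll for the SPDK$i serial
        sleep 2
        [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ] && break
    done
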
07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.098 07:53:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:36.664 07:53:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:36.664 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:36.664 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.664 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:36.664 07:53:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.191 07:53:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:39.449 07:53:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:39.449 07:53:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:39.449 07:53:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:39.449 07:53:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:39.449 07:53:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.978 07:53:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:42.235 07:53:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:42.235 07:53:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:42.235 07:53:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.235 07:53:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:42.235 07:53:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.761 07:53:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:45.019 07:53:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:45.019 07:53:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:45.019 07:53:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:45.019 07:53:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:45.019 07:53:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.548 07:53:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:47.806 07:53:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:47.806 07:53:39 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:47.806 07:53:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.806 07:53:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:47.806 07:53:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.330 07:53:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:50.588 07:53:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:50.588 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:50.588 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:50.588 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:50.588 07:53:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.111 07:53:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:53.698 07:53:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:53.698 07:53:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:53.698 07:53:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.698 07:53:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
00:26:53.698 07:53:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.589 07:53:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:56.523 07:53:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:56.523 07:53:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:56.523 07:53:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:56.523 07:53:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:56.523 07:53:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.421 07:53:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:59.355 07:53:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:59.355 07:53:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:59.355 07:53:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.355 07:53:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:59.355 07:53:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:01.885 07:53:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:01.885 [global] 00:27:01.885 thread=1 00:27:01.885 invalidate=1 00:27:01.885 rw=read 00:27:01.885 time_based=1 00:27:01.885 runtime=10 00:27:01.885 ioengine=libaio 00:27:01.885 direct=1 00:27:01.885 bs=262144 00:27:01.885 iodepth=64 00:27:01.885 norandommap=1 00:27:01.885 numjobs=1 00:27:01.885 00:27:01.885 [job0] 00:27:01.885 filename=/dev/nvme0n1 00:27:01.885 [job1] 00:27:01.885 filename=/dev/nvme10n1 00:27:01.885 [job2] 00:27:01.885 filename=/dev/nvme1n1 00:27:01.885 [job3] 00:27:01.885 filename=/dev/nvme2n1 00:27:01.885 [job4] 00:27:01.885 filename=/dev/nvme3n1 00:27:01.885 [job5] 00:27:01.885 filename=/dev/nvme4n1 00:27:01.885 [job6] 00:27:01.885 filename=/dev/nvme5n1 00:27:01.885 [job7] 00:27:01.885 filename=/dev/nvme6n1 00:27:01.885 [job8] 00:27:01.885 filename=/dev/nvme7n1 00:27:01.885 [job9] 00:27:01.885 filename=/dev/nvme8n1 00:27:01.885 [job10] 00:27:01.885 filename=/dev/nvme9n1 00:27:01.885 Could not set queue depth (nvme0n1) 00:27:01.885 Could not set queue depth (nvme10n1) 00:27:01.885 Could not set queue depth (nvme1n1) 00:27:01.885 Could not set queue depth (nvme2n1) 00:27:01.885 Could not set queue depth (nvme3n1) 00:27:01.885 Could not set queue depth (nvme4n1) 00:27:01.885 Could not set queue depth (nvme5n1) 00:27:01.885 Could not set queue depth (nvme6n1) 00:27:01.885 Could not set queue depth (nvme7n1) 00:27:01.885 Could not set queue depth (nvme8n1) 00:27:01.885 Could not set queue depth (nvme9n1) 00:27:01.885 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.885 fio-3.35 00:27:01.885 Starting 11 threads 00:27:14.122 00:27:14.122 job0: 
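
The fio-wrapper arguments map directly onto the job file printed above: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes runtime=10 with time_based=1; the nvmf profile evidently emits one libaio job per connected namespace (job0 through job10). The "Could not set queue depth" warnings appear benign here, since fio still starts all 11 threads. A roughly equivalent direct invocation for a single namespace (a sketch of the expanded parameters, not the wrapper itself):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=read --bs=262144 --iodepth=64 --runtime=10 --time_based \
        --numjobs=1 --norandommap --invalidate=1 --thread

The per-thread read bandwidth and latency results follow below.
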
(groupid=0, jobs=1): err= 0: pid=1145614: Mon Jul 15 07:54:03 2024 00:27:14.122 read: IOPS=622, BW=156MiB/s (163MB/s)(1579MiB/10141msec) 00:27:14.122 slat (usec): min=9, max=135884, avg=913.44, stdev=5062.99 00:27:14.122 clat (msec): min=3, max=353, avg=101.78, stdev=57.39 00:27:14.122 lat (msec): min=3, max=353, avg=102.69, stdev=58.06 00:27:14.122 clat percentiles (msec): 00:27:14.122 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 29], 20.00th=[ 48], 00:27:14.122 | 30.00th=[ 66], 40.00th=[ 85], 50.00th=[ 101], 60.00th=[ 113], 00:27:14.122 | 70.00th=[ 125], 80.00th=[ 146], 90.00th=[ 180], 95.00th=[ 209], 00:27:14.122 | 99.00th=[ 253], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 305], 00:27:14.122 | 99.99th=[ 355] 00:27:14.122 bw ( KiB/s): min=81408, max=278016, per=9.99%, avg=160051.20, stdev=47589.02, samples=20 00:27:14.122 iops : min= 318, max= 1086, avg=625.20, stdev=185.89, samples=20 00:27:14.122 lat (msec) : 4=0.17%, 10=1.63%, 20=3.20%, 50=15.77%, 100=28.96% 00:27:14.122 lat (msec) : 250=48.81%, 500=1.46% 00:27:14.122 cpu : usr=0.27%, sys=1.80%, ctx=1226, majf=0, minf=4097 00:27:14.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.122 issued rwts: total=6316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.122 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.122 job1: (groupid=0, jobs=1): err= 0: pid=1145615: Mon Jul 15 07:54:03 2024 00:27:14.122 read: IOPS=647, BW=162MiB/s (170MB/s)(1642MiB/10140msec) 00:27:14.122 slat (usec): min=10, max=95485, avg=1180.17, stdev=4993.08 00:27:14.122 clat (usec): min=1734, max=309416, avg=97563.80, stdev=58245.27 00:27:14.122 lat (usec): min=1787, max=309450, avg=98743.96, stdev=58978.55 00:27:14.122 clat percentiles (msec): 00:27:14.122 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 42], 00:27:14.122 | 30.00th=[ 58], 40.00th=[ 77], 50.00th=[ 90], 60.00th=[ 104], 00:27:14.122 | 70.00th=[ 115], 80.00th=[ 146], 90.00th=[ 190], 95.00th=[ 209], 00:27:14.122 | 99.00th=[ 243], 99.50th=[ 268], 99.90th=[ 292], 99.95th=[ 309], 00:27:14.122 | 99.99th=[ 309] 00:27:14.122 bw ( KiB/s): min=81408, max=382464, per=10.40%, avg=166476.80, stdev=75680.58, samples=20 00:27:14.122 iops : min= 318, max= 1494, avg=650.30, stdev=295.63, samples=20 00:27:14.122 lat (msec) : 2=0.03%, 4=1.08%, 10=0.87%, 20=2.27%, 50=21.43% 00:27:14.122 lat (msec) : 100=31.38%, 250=42.13%, 500=0.81% 00:27:14.122 cpu : usr=0.39%, sys=2.14%, ctx=1278, majf=0, minf=4097 00:27:14.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.122 issued rwts: total=6567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.122 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.122 job2: (groupid=0, jobs=1): err= 0: pid=1145616: Mon Jul 15 07:54:03 2024 00:27:14.122 read: IOPS=509, BW=127MiB/s (134MB/s)(1287MiB/10098msec) 00:27:14.122 slat (usec): min=9, max=73666, avg=1457.70, stdev=5314.78 00:27:14.122 clat (msec): min=3, max=305, avg=124.02, stdev=51.68 00:27:14.122 lat (msec): min=3, max=305, avg=125.48, stdev=52.46 00:27:14.122 clat percentiles (msec): 00:27:14.122 | 1.00th=[ 9], 5.00th=[ 40], 10.00th=[ 70], 20.00th=[ 85], 00:27:14.122 | 30.00th=[ 95], 40.00th=[ 108], 
50.00th=[ 116], 60.00th=[ 126],
00:27:14.122 | 70.00th=[ 144], 80.00th=[ 171], 90.00th=[ 197], 95.00th=[ 222],
00:27:14.122 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 292], 99.95th=[ 296],
00:27:14.122 | 99.99th=[ 305]
00:27:14.122 bw ( KiB/s): min=68096, max=242688, per=8.13%, avg=130124.80, stdev=42402.64, samples=20
00:27:14.122 iops : min= 266, max= 948, avg=508.30, stdev=165.64, samples=20
00:27:14.122 lat (msec) : 4=0.14%, 10=1.15%, 20=0.70%, 50=4.02%, 100=27.24%
00:27:14.122 lat (msec) : 250=66.04%, 500=0.72%
00:27:14.122 cpu : usr=0.29%, sys=1.82%, ctx=1053, majf=0, minf=3721
00:27:14.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.122 issued rwts: total=5147,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.122 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.122 job3: (groupid=0, jobs=1): err= 0: pid=1145617: Mon Jul 15 07:54:03 2024
00:27:14.122 read: IOPS=589, BW=147MiB/s (155MB/s)(1489MiB/10098msec)
00:27:14.122 slat (usec): min=14, max=100275, avg=1464.15, stdev=5003.02
00:27:14.122 clat (usec): min=1045, max=314845, avg=106991.91, stdev=52404.91
00:27:14.122 lat (usec): min=1064, max=314880, avg=108456.06, stdev=53230.57
00:27:14.122 clat percentiles (msec):
00:27:14.122 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 37], 20.00th=[ 67],
00:27:14.122 | 30.00th=[ 79], 40.00th=[ 91], 50.00th=[ 102], 60.00th=[ 113],
00:27:14.122 | 70.00th=[ 129], 80.00th=[ 155], 90.00th=[ 180], 95.00th=[ 199],
00:27:14.122 | 99.00th=[ 234], 99.50th=[ 255], 99.90th=[ 279], 99.95th=[ 292],
00:27:14.122 | 99.99th=[ 317]
00:27:14.122 bw ( KiB/s): min=66048, max=228352, per=9.42%, avg=150823.70, stdev=47007.20, samples=20
00:27:14.122 iops : min= 258, max= 892, avg=589.15, stdev=183.62, samples=20
00:27:14.122 lat (msec) : 2=0.13%, 4=0.47%, 10=0.69%, 20=2.38%, 50=10.19%
00:27:14.122 lat (msec) : 100=35.26%, 250=50.34%, 500=0.52%
00:27:14.122 cpu : usr=0.37%, sys=2.17%, ctx=1195, majf=0, minf=4097
00:27:14.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:27:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.122 issued rwts: total=5955,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.122 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.122 job4: (groupid=0, jobs=1): err= 0: pid=1145618: Mon Jul 15 07:54:03 2024
00:27:14.122 read: IOPS=432, BW=108MiB/s (113MB/s)(1096MiB/10138msec)
00:27:14.122 slat (usec): min=12, max=103074, avg=1875.56, stdev=6343.47
00:27:14.122 clat (usec): min=1377, max=375059, avg=146060.79, stdev=58198.99
00:27:14.122 lat (usec): min=1423, max=389124, avg=147936.35, stdev=59109.55
00:27:14.122 clat percentiles (msec):
00:27:14.122 | 1.00th=[ 14], 5.00th=[ 25], 10.00th=[ 63], 20.00th=[ 110],
00:27:14.122 | 30.00th=[ 124], 40.00th=[ 133], 50.00th=[ 144], 60.00th=[ 159],
00:27:14.122 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 220], 95.00th=[ 236],
00:27:14.122 | 99.00th=[ 268], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 376],
00:27:14.122 | 99.99th=[ 376]
00:27:14.122 bw ( KiB/s): min=69120, max=194048, per=6.90%, avg=110566.40, stdev=33876.20, samples=20
00:27:14.122 iops : min= 270, max= 758, avg=431.90, stdev=132.33, samples=20
00:27:14.122 lat (msec) : 2=0.09%, 4=0.05%, 10=0.18%, 20=3.04%, 50=5.07%
00:27:14.122 lat (msec) : 100=5.71%, 250=83.07%, 500=2.81%
00:27:14.122 cpu : usr=0.35%, sys=1.51%, ctx=913, majf=0, minf=4097
00:27:14.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:27:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.122 issued rwts: total=4382,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.122 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.122 job5: (groupid=0, jobs=1): err= 0: pid=1145619: Mon Jul 15 07:54:03 2024
00:27:14.122 read: IOPS=481, BW=120MiB/s (126MB/s)(1220MiB/10144msec)
00:27:14.122 slat (usec): min=9, max=166420, avg=1232.88, stdev=5816.31
00:27:14.122 clat (usec): min=1462, max=366725, avg=131668.79, stdev=54894.09
00:27:14.122 lat (usec): min=1488, max=366787, avg=132901.67, stdev=55548.08
00:27:14.122 clat percentiles (msec):
00:27:14.122 | 1.00th=[ 21], 5.00th=[ 41], 10.00th=[ 66], 20.00th=[ 91],
00:27:14.122 | 30.00th=[ 105], 40.00th=[ 114], 50.00th=[ 123], 60.00th=[ 136],
00:27:14.122 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 205], 95.00th=[ 228],
00:27:14.122 | 99.00th=[ 271], 99.50th=[ 296], 99.90th=[ 355], 99.95th=[ 355],
00:27:14.122 | 99.99th=[ 368]
00:27:14.122 bw ( KiB/s): min=81408, max=181760, per=7.70%, avg=123340.80, stdev=27366.06, samples=20
00:27:14.122 iops : min= 318, max= 710, avg=481.80, stdev=106.90, samples=20
00:27:14.122 lat (msec) : 2=0.02%, 10=0.16%, 20=0.61%, 50=5.53%, 100=18.60%
00:27:14.122 lat (msec) : 250=73.24%, 500=1.82%
00:27:14.122 cpu : usr=0.30%, sys=1.64%, ctx=1094, majf=0, minf=4097
00:27:14.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:27:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.123 issued rwts: total=4881,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.123 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.123 job6: (groupid=0, jobs=1): err= 0: pid=1145620: Mon Jul 15 07:54:03 2024
00:27:14.123 read: IOPS=539, BW=135MiB/s (141MB/s)(1368MiB/10142msec)
00:27:14.123 slat (usec): min=8, max=76478, avg=1637.96, stdev=5113.36
00:27:14.123 clat (msec): min=9, max=362, avg=116.90, stdev=59.43
00:27:14.123 lat (msec): min=9, max=362, avg=118.54, stdev=60.36
00:27:14.123 clat percentiles (msec):
00:27:14.123 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 53],
00:27:14.123 | 30.00th=[ 77], 40.00th=[ 100], 50.00th=[ 117], 60.00th=[ 131],
00:27:14.123 | 70.00th=[ 146], 80.00th=[ 171], 90.00th=[ 192], 95.00th=[ 218],
00:27:14.123 | 99.00th=[ 259], 99.50th=[ 296], 99.90th=[ 359], 99.95th=[ 363],
00:27:14.123 | 99.99th=[ 363]
00:27:14.123 bw ( KiB/s): min=68608, max=392704, per=8.65%, avg=138444.80, stdev=78632.02, samples=20
00:27:14.123 iops : min= 268, max= 1534, avg=540.80, stdev=307.16, samples=20
00:27:14.123 lat (msec) : 10=0.11%, 20=0.24%, 50=18.94%, 100=20.93%, 250=58.22%
00:27:14.123 lat (msec) : 500=1.57%
00:27:14.123 cpu : usr=0.46%, sys=1.76%, ctx=1090, majf=0, minf=4097
00:27:14.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:14.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.123 issued rwts: total=5471,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.123 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.123 job7: (groupid=0, jobs=1): err= 0: pid=1145621: Mon Jul 15 07:54:03 2024
00:27:14.123 read: IOPS=642, BW=161MiB/s (168MB/s)(1621MiB/10098msec)
00:27:14.123 slat (usec): min=9, max=142904, avg=1240.18, stdev=4677.10
00:27:14.123 clat (msec): min=3, max=308, avg=98.34, stdev=56.60
00:27:14.123 lat (msec): min=3, max=355, avg=99.58, stdev=57.17
00:27:14.123 clat percentiles (msec):
00:27:14.123 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 46],
00:27:14.123 | 30.00th=[ 52], 40.00th=[ 68], 50.00th=[ 84], 60.00th=[ 108],
00:27:14.123 | 70.00th=[ 123], 80.00th=[ 148], 90.00th=[ 186], 95.00th=[ 207],
00:27:14.123 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 300], 99.95th=[ 309],
00:27:14.123 | 99.99th=[ 309]
00:27:14.123 bw ( KiB/s): min=82944, max=342016, per=10.27%, avg=164403.20, stdev=76460.23, samples=20
00:27:14.123 iops : min= 324, max= 1336, avg=642.20, stdev=298.67, samples=20
00:27:14.123 lat (msec) : 4=0.06%, 10=0.31%, 20=0.96%, 50=26.88%, 100=28.47%
00:27:14.123 lat (msec) : 250=42.76%, 500=0.57%
00:27:14.123 cpu : usr=0.36%, sys=2.28%, ctx=1111, majf=0, minf=4097
00:27:14.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:27:14.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.123 issued rwts: total=6485,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.123 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.123 job8: (groupid=0, jobs=1): err= 0: pid=1145622: Mon Jul 15 07:54:03 2024
00:27:14.123 read: IOPS=587, BW=147MiB/s (154MB/s)(1484MiB/10102msec)
00:27:14.123 slat (usec): min=9, max=108857, avg=1602.11, stdev=5404.68
00:27:14.123 clat (usec): min=1793, max=326272, avg=107252.13, stdev=50843.07
00:27:14.123 lat (usec): min=1816, max=339857, avg=108854.25, stdev=51637.50
00:27:14.123 clat percentiles (msec):
00:27:14.123 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 61],
00:27:14.123 | 30.00th=[ 73], 40.00th=[ 88], 50.00th=[ 103], 60.00th=[ 114],
00:27:14.123 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 182], 95.00th=[ 211],
00:27:14.123 | 99.00th=[ 247], 99.50th=[ 268], 99.90th=[ 309], 99.95th=[ 309],
00:27:14.123 | 99.99th=[ 326]
00:27:14.123 bw ( KiB/s): min=70144, max=264704, per=9.39%, avg=150323.20, stdev=55913.19, samples=20
00:27:14.123 iops : min= 274, max= 1034, avg=587.20, stdev=218.41, samples=20
00:27:14.123 lat (msec) : 2=0.02%, 4=0.54%, 10=0.40%, 20=0.05%, 50=8.66%
00:27:14.123 lat (msec) : 100=38.87%, 250=50.68%, 500=0.78%
00:27:14.123 cpu : usr=0.39%, sys=1.88%, ctx=1003, majf=0, minf=4097
00:27:14.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:27:14.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.123 issued rwts: total=5935,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.123 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.123 job9: (groupid=0, jobs=1): err= 0: pid=1145623: Mon Jul 15 07:54:03 2024
00:27:14.123 read: IOPS=568, BW=142MiB/s (149MB/s)(1436MiB/10100msec)
00:27:14.123 slat (usec): min=9, max=148764, avg=1360.90, stdev=5851.13
00:27:14.123 clat (msec): min=5, max=378, avg=111.07, stdev=54.47
00:27:14.123 lat (msec): min=5, max=378, avg=112.43, stdev=55.29
00:27:14.123 clat percentiles (msec):
00:27:14.123 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 63],
00:27:14.123 | 30.00th=[ 86], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 115],
00:27:14.123 | 70.00th=[ 128], 80.00th=[ 157], 90.00th=[ 190], 95.00th=[ 213],
00:27:14.123 | 99.00th=[ 253], 99.50th=[ 279], 99.90th=[ 368], 99.95th=[ 372],
00:27:14.123 | 99.99th=[ 380]
00:27:14.123 bw ( KiB/s): min=67584, max=293376, per=9.08%, avg=145459.20, stdev=54526.36, samples=20
00:27:14.123 iops : min= 264, max= 1146, avg=568.20, stdev=212.99, samples=20
00:27:14.123 lat (msec) : 10=0.54%, 20=1.10%, 50=13.73%, 100=27.80%, 250=55.65%
00:27:14.123 lat (msec) : 500=1.18%
00:27:14.123 cpu : usr=0.32%, sys=1.68%, ctx=1119, majf=0, minf=4097
00:27:14.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:27:14.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.123 issued rwts: total=5745,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.123 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.123 job10: (groupid=0, jobs=1): err= 0: pid=1145624: Mon Jul 15 07:54:03 2024
00:27:14.123 read: IOPS=655, BW=164MiB/s (172MB/s)(1642MiB/10027msec)
00:27:14.123 slat (usec): min=10, max=162494, avg=1208.19, stdev=5020.01
00:27:14.123 clat (msec): min=4, max=340, avg=96.44, stdev=49.16
00:27:14.123 lat (msec): min=4, max=348, avg=97.65, stdev=49.76
00:27:14.123 clat percentiles (msec):
00:27:14.123 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 38], 20.00th=[ 54],
00:27:14.123 | 30.00th=[ 68], 40.00th=[ 81], 50.00th=[ 94], 60.00th=[ 106],
00:27:14.123 | 70.00th=[ 116], 80.00th=[ 130], 90.00th=[ 159], 95.00th=[ 188],
00:27:14.123 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 313],
00:27:14.123 | 99.99th=[ 342]
00:27:14.123 bw ( KiB/s): min=86016, max=292864, per=10.40%, avg=166537.10, stdev=69183.86, samples=20
00:27:14.123 iops : min= 336, max= 1144, avg=650.50, stdev=270.29, samples=20
00:27:14.123 lat (msec) : 10=0.62%, 20=2.51%, 50=14.86%, 100=37.87%, 250=42.90%
00:27:14.123 lat (msec) : 500=1.23%
00:27:14.123 cpu : usr=0.32%, sys=2.06%, ctx=1181, majf=0, minf=4097
00:27:14.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:27:14.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:14.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:14.123 issued rwts: total=6568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:14.123 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:14.123
00:27:14.123 Run status group 0 (all jobs):
00:27:14.123 READ: bw=1564MiB/s (1640MB/s), 108MiB/s-164MiB/s (113MB/s-172MB/s), io=15.5GiB (16.6GB), run=10027-10144msec
00:27:14.123
00:27:14.123 Disk stats (read/write):
00:27:14.123 nvme0n1: ios=12477/0, merge=0/0, ticks=1239465/0, in_queue=1239465, util=96.96%
00:27:14.123 nvme10n1: ios=12972/0, merge=0/0, ticks=1232382/0, in_queue=1232382, util=97.20%
00:27:14.123 nvme1n1: ios=10030/0, merge=0/0, ticks=1236857/0, in_queue=1236857, util=97.50%
00:27:14.123 nvme2n1: ios=11695/0, merge=0/0, ticks=1233659/0, in_queue=1233659, util=97.66%
00:27:14.123 nvme3n1: ios=8595/0, merge=0/0, ticks=1223994/0, in_queue=1223994, util=97.75%
00:27:14.123 nvme4n1: ios=9578/0, merge=0/0, ticks=1233660/0, in_queue=1233660, util=98.12%
00:27:14.123 nvme5n1: ios=10773/0, merge=0/0, ticks=1222161/0, in_queue=1222161, util=98.29%
00:27:14.123 nvme6n1: ios=12754/0, merge=0/0, ticks=1227120/0, in_queue=1227120, util=98.43%
00:27:14.123 nvme7n1: ios=11555/0, merge=0/0, ticks=1224003/0, in_queue=1224003, util=98.89%
00:27:14.123 nvme8n1: ios=11250/0, merge=0/0, ticks=1235342/0, in_queue=1235342, util=99.10%
00:27:14.123 nvme9n1: ios=12797/0, merge=0/0, ticks=1240302/0, in_queue=1240302, util=99.23%
00:27:14.123 07:54:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:27:14.123 [global]
00:27:14.123 thread=1
00:27:14.123 invalidate=1
00:27:14.123 rw=randwrite
00:27:14.123 time_based=1
00:27:14.123 runtime=10
00:27:14.123 ioengine=libaio
00:27:14.123 direct=1
00:27:14.123 bs=262144
00:27:14.123 iodepth=64
00:27:14.123 norandommap=1
00:27:14.123 numjobs=1
00:27:14.123
00:27:14.123 [job0]
00:27:14.123 filename=/dev/nvme0n1
00:27:14.123 [job1]
00:27:14.123 filename=/dev/nvme10n1
00:27:14.123 [job2]
00:27:14.123 filename=/dev/nvme1n1
00:27:14.123 [job3]
00:27:14.123 filename=/dev/nvme2n1
00:27:14.123 [job4]
00:27:14.123 filename=/dev/nvme3n1
00:27:14.123 [job5]
00:27:14.123 filename=/dev/nvme4n1
00:27:14.123 [job6]
00:27:14.123 filename=/dev/nvme5n1
00:27:14.123 [job7]
00:27:14.123 filename=/dev/nvme6n1
00:27:14.123 [job8]
00:27:14.123 filename=/dev/nvme7n1
00:27:14.123 [job9]
00:27:14.123 filename=/dev/nvme8n1
00:27:14.123 [job10]
00:27:14.123 filename=/dev/nvme9n1
00:27:14.123 Could not set queue depth (nvme0n1)
00:27:14.123 Could not set queue depth (nvme10n1)
00:27:14.123 Could not set queue depth (nvme1n1)
00:27:14.123 Could not set queue depth (nvme2n1)
00:27:14.123 Could not set queue depth (nvme3n1)
00:27:14.123 Could not set queue depth (nvme4n1)
00:27:14.123 Could not set queue depth (nvme5n1)
00:27:14.123 Could not set queue depth (nvme6n1)
00:27:14.123 Could not set queue depth (nvme7n1)
00:27:14.123 Could not set queue depth (nvme8n1)
00:27:14.123 Could not set queue depth (nvme9n1)
00:27:14.123 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.123 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.123 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.123 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.123 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.123 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.123 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.124 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.124 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.124 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.124 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:14.124 fio-3.35
00:27:14.124 Starting 11 threads
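[Editorial sketch] The [global] section and per-device job stanzas echoed above are the job file fio-wrapper hands to fio. A minimal bash sketch of an equivalent generator, assuming the wrapper simply templates its flags (-i 262144 -d 64 -t randwrite -r 10) into the [global] section; the helper name gen_fio_job is illustrative and not a function from the SPDK tree:

  #!/usr/bin/env bash
  # Hypothetical reconstruction of the job file printed in the log above.
  gen_fio_job() {
    local bs=$1 depth=$2 rw=$3 runtime=$4
    cat <<EOF
  [global]
  thread=1
  invalidate=1
  rw=$rw
  time_based=1
  runtime=$runtime
  ioengine=libaio
  direct=1
  bs=$bs
  iodepth=$depth
  norandommap=1
  numjobs=1
  EOF
    # Device order matches the log: job1 is nvme10n1, then nvme1n1..nvme9n1.
    local i=0 dev
    for dev in /dev/nvme{0,10,1,2,3,4,5,6,7,8,9}n1; do
      printf '\n[job%d]\nfilename=%s\n' "$i" "$dev"
      i=$((i + 1))
    done
  }
  gen_fio_job 262144 64 randwrite 10 > multiconnection.fio

Running `fio multiconnection.fio` against the same namespaces would reproduce the 11-thread randwrite pass that follows.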
00:27:24.094
00:27:24.094 job0: (groupid=0, jobs=1): err= 0: pid=1146758: Mon Jul 15 07:54:14 2024
00:27:24.094 write: IOPS=362, BW=90.6MiB/s (95.0MB/s)(912MiB/10065msec); 0 zone resets
00:27:24.094 slat (usec): min=21, max=191324, avg=2090.49, stdev=7059.42
00:27:24.094 clat (msec): min=2, max=538, avg=174.43, stdev=97.46
00:27:24.094 lat (msec): min=2, max=538, avg=176.52, stdev=98.76
00:27:24.094 clat percentiles (msec):
00:27:24.094 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 78],
00:27:24.094 | 30.00th=[ 114], 40.00th=[ 144], 50.00th=[ 186], 60.00th=[ 207],
00:27:24.094 | 70.00th=[ 236], 80.00th=[ 264], 90.00th=[ 292], 95.00th=[ 317],
00:27:24.094 | 99.00th=[ 401], 99.50th=[ 456], 99.90th=[ 523], 99.95th=[ 542],
00:27:24.094 | 99.99th=[ 542]
00:27:24.094 bw ( KiB/s): min=48128, max=171008, per=8.09%, avg=91769.50, stdev=34168.46, samples=20
00:27:24.094 iops : min= 188, max= 668, avg=358.45, stdev=133.49, samples=20
00:27:24.094 lat (msec) : 4=0.16%, 10=2.19%, 20=3.73%, 50=7.24%, 100=14.09%
00:27:24.094 lat (msec) : 250=47.59%, 500=24.73%, 750=0.27%
00:27:24.094 cpu : usr=1.09%, sys=1.20%, ctx=1954, majf=0, minf=1
00:27:24.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:27:24.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.094 issued rwts: total=0,3648,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.094 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.094 job1: (groupid=0, jobs=1): err= 0: pid=1146770: Mon Jul 15 07:54:14 2024
00:27:24.094 write: IOPS=392, BW=98.1MiB/s (103MB/s)(993MiB/10121msec); 0 zone resets
00:27:24.094 slat (usec): min=18, max=127641, avg=1456.54, stdev=5206.77
00:27:24.094 clat (usec): min=1757, max=369058, avg=161585.48, stdev=92898.23
00:27:24.094 lat (usec): min=1822, max=369087, avg=163042.02, stdev=94123.95
00:27:24.094 clat percentiles (msec):
00:27:24.094 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 64],
00:27:24.094 | 30.00th=[ 91], 40.00th=[ 129], 50.00th=[ 163], 60.00th=[ 207],
00:27:24.094 | 70.00th=[ 226], 80.00th=[ 247], 90.00th=[ 288], 95.00th=[ 305],
00:27:24.094 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 368], 99.95th=[ 368],
00:27:24.094 | 99.99th=[ 368]
00:27:24.094 bw ( KiB/s): min=49152, max=164352, per=8.82%, avg=100003.40, stdev=27509.02, samples=20
00:27:24.094 iops : min= 192, max= 642, avg=390.60, stdev=107.49, samples=20
00:27:24.094 lat (msec) : 2=0.05%, 4=0.28%, 10=1.11%, 20=2.22%, 50=11.89%
00:27:24.094 lat (msec) : 100=17.20%, 250=48.44%, 500=18.82%
00:27:24.094 cpu : usr=1.24%, sys=1.34%, ctx=2771, majf=0, minf=1
00:27:24.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:27:24.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.094 issued rwts: total=0,3970,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.094 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.094 job2: (groupid=0, jobs=1): err= 0: pid=1146771: Mon Jul 15 07:54:14 2024
00:27:24.094 write: IOPS=317, BW=79.4MiB/s (83.3MB/s)(807MiB/10162msec); 0 zone resets
00:27:24.094 slat (usec): min=16, max=83721, avg=2412.13, stdev=6264.37
00:27:24.094 clat (usec): min=1526, max=397043, avg=198898.81, stdev=86620.09
00:27:24.094 lat (usec): min=1579, max=397099, avg=201310.94, stdev=87951.17
00:27:24.094 clat percentiles (msec):
00:27:24.094 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 58], 20.00th=[ 117],
00:27:24.094 | 30.00th=[ 176], 40.00th=[ 199], 50.00th=[ 211], 60.00th=[ 232],
00:27:24.094 | 70.00th=[ 247], 80.00th=[ 268], 90.00th=[ 305], 95.00th=[ 330],
00:27:24.094 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 397],
00:27:24.094 | 99.99th=[ 397]
00:27:24.094 bw ( KiB/s): min=47104, max=155648, per=7.14%, avg=81042.85, stdev=27346.98, samples=20
00:27:24.094 iops : min= 184, max= 608, avg=316.55, stdev=106.84, samples=20
00:27:24.094 lat (msec) : 2=0.09%, 4=0.06%, 10=1.80%, 20=1.86%, 50=4.80%
00:27:24.094 lat (msec) : 100=8.58%, 250=54.20%, 500=28.62%
00:27:24.094 cpu : usr=0.93%, sys=1.07%, ctx=1704, majf=0, minf=1
00:27:24.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0%
00:27:24.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.094 issued rwts: total=0,3229,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.094 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.094 job3: (groupid=0, jobs=1): err= 0: pid=1146772: Mon Jul 15 07:54:14 2024
00:27:24.094 write: IOPS=321, BW=80.5MiB/s (84.4MB/s)(818MiB/10170msec); 0 zone resets
00:27:24.094 slat (usec): min=17, max=88360, avg=2128.53, stdev=6433.81
00:27:24.094 clat (usec): min=1588, max=411822, avg=196628.95, stdev=102272.84
00:27:24.094 lat (usec): min=1623, max=411859, avg=198757.49, stdev=103809.58
00:27:24.094 clat percentiles (msec):
00:27:24.094 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 35], 20.00th=[ 67],
00:27:24.094 | 30.00th=[ 155], 40.00th=[ 203], 50.00th=[ 228], 60.00th=[ 243],
00:27:24.094 | 70.00th=[ 259], 80.00th=[ 288], 90.00th=[ 321], 95.00th=[ 330],
00:27:24.094 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 414],
00:27:24.094 | 99.99th=[ 414]
00:27:24.094 bw ( KiB/s): min=45056, max=162304, per=7.24%, avg=82162.45, stdev=30399.60, samples=20
00:27:24.094 iops : min= 176, max= 634, avg=320.90, stdev=118.78, samples=20
00:27:24.094 lat (msec) : 2=0.09%, 4=0.37%, 10=2.99%, 20=2.81%, 50=8.28%
00:27:24.094 lat (msec) : 100=10.11%, 250=40.33%, 500=35.01%
00:27:24.094 cpu : usr=0.99%, sys=1.03%, ctx=1997, majf=0, minf=1
00:27:24.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:27:24.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.094 issued rwts: total=0,3273,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.094 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.094 job4: (groupid=0, jobs=1): err= 0: pid=1146773: Mon Jul 15 07:54:14 2024
00:27:24.094 write: IOPS=533, BW=133MiB/s (140MB/s)(1354MiB/10153msec); 0 zone resets
00:27:24.094 slat (usec): min=20, max=237241, avg=1010.84, stdev=5105.27
00:27:24.094 clat (usec): min=1215, max=619919, avg=118880.90, stdev=84508.63
00:27:24.094 lat (usec): min=1254, max=620014, avg=119891.74, stdev=85121.41
00:27:24.094 clat percentiles (msec):
00:27:24.094 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 52],
00:27:24.094 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 112],
00:27:24.094 | 70.00th=[ 157], 80.00th=[ 203], 90.00th=[ 239], 95.00th=[ 259],
00:27:24.094 | 99.00th=[ 422], 99.50th=[ 456], 99.90th=[ 609], 99.95th=[ 609],
00:27:24.094 | 99.99th=[ 617]
00:27:24.094 bw ( KiB/s): min=61952, max=241152, per=12.08%, avg=136997.60, stdev=57433.20, samples=20
00:27:24.094 iops : min= 242, max= 942, avg=535.10, stdev=224.30, samples=20
00:27:24.094 lat (msec) : 2=0.06%, 4=0.48%, 10=1.83%, 20=3.51%, 50=13.31%
00:27:24.094 lat (msec) : 100=37.39%, 250=36.74%, 500=6.50%, 750=0.18%
00:27:24.094 cpu : usr=1.45%, sys=1.69%, ctx=3459, majf=0, minf=1
00:27:24.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:24.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.094 issued rwts: total=0,5416,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.094 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.094 job5: (groupid=0, jobs=1): err= 0: pid=1146774: Mon Jul 15 07:54:14 2024
00:27:24.095 write: IOPS=350, BW=87.7MiB/s (92.0MB/s)(890MiB/10146msec); 0 zone resets
00:27:24.095 slat (usec): min=17, max=66147, avg=1477.97, stdev=5052.39
00:27:24.095 clat (msec): min=2, max=425, avg=180.88, stdev=94.38
00:27:24.095 lat (msec): min=2, max=428, avg=182.35, stdev=95.52
00:27:24.095 clat percentiles (msec):
00:27:24.095 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 37], 20.00th=[ 75],
00:27:24.095 | 30.00th=[ 126], 40.00th=[ 167], 50.00th=[ 197], 60.00th=[ 226],
00:27:24.095 | 70.00th=[ 249], 80.00th=[ 271], 90.00th=[ 292], 95.00th=[ 317],
00:27:24.095 | 99.00th=[ 351], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 422],
00:27:24.095 | 99.99th=[ 426]
00:27:24.095 bw ( KiB/s): min=61440, max=163328, per=7.89%, avg=89486.45, stdev=30700.41, samples=20
00:27:24.095 iops : min= 240, max= 638, avg=349.50, stdev=119.96, samples=20
00:27:24.095 lat (msec) : 4=0.22%, 10=1.49%, 20=3.20%, 50=9.02%, 100=9.78%
00:27:24.095 lat (msec) : 250=47.06%, 500=29.22%
00:27:24.095 cpu : usr=0.99%, sys=1.09%, ctx=2520, majf=0, minf=1
00:27:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2%
00:27:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.095 issued rwts: total=0,3559,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.095 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.095 job6: (groupid=0, jobs=1): err= 0: pid=1146775: Mon Jul 15 07:54:14 2024
00:27:24.095 write: IOPS=390, BW=97.7MiB/s (102MB/s)(993MiB/10163msec); 0 zone resets
00:27:24.095 slat (usec): min=23, max=95144, avg=1681.98, stdev=4725.97
00:27:24.095 clat (msec): min=2, max=344, avg=161.96, stdev=67.81
00:27:24.095 lat (msec): min=2, max=344, avg=163.64, stdev=68.65
00:27:24.095 clat percentiles (msec):
00:27:24.095 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 77], 20.00th=[ 109],
00:27:24.095 | 30.00th=[ 117], 40.00th=[ 136], 50.00th=[ 161], 60.00th=[ 184],
00:27:24.095 | 70.00th=[ 203], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 268],
00:27:24.095 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 338], 99.95th=[ 347],
00:27:24.095 | 99.99th=[ 347]
00:27:24.095 bw ( KiB/s): min=63488, max=164864, per=8.82%, avg=100058.85, stdev=27169.68, samples=20
00:27:24.095 iops : min= 248, max= 644, avg=390.80, stdev=106.15, samples=20
00:27:24.095 lat (msec) : 4=0.08%, 10=0.38%, 20=0.98%, 50=4.18%, 100=11.28%
00:27:24.095 lat (msec) : 250=71.93%, 500=11.18%
00:27:24.095 cpu : usr=1.32%, sys=1.33%, ctx=2263, majf=0, minf=1
00:27:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:27:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.095 issued rwts: total=0,3972,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.095 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.095 job7: (groupid=0, jobs=1): err= 0: pid=1146776: Mon Jul 15 07:54:14 2024
00:27:24.095 write: IOPS=409, BW=102MiB/s (107MB/s)(1039MiB/10154msec); 0 zone resets
00:27:24.095 slat (usec): min=15, max=111109, avg=1752.13, stdev=5368.08
00:27:24.095 clat (msec): min=2, max=437, avg=154.60, stdev=99.59
00:27:24.095 lat (msec): min=2, max=441, avg=156.35, stdev=100.86
00:27:24.095 clat percentiles (msec):
00:27:24.095 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 32], 20.00th=[ 58],
00:27:24.095 | 30.00th=[ 80], 40.00th=[ 114], 50.00th=[ 138], 60.00th=[ 176],
00:27:24.095 | 70.00th=[ 218], 80.00th=[ 247], 90.00th=[ 296], 95.00th=[ 330],
00:27:24.095 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435],
00:27:24.095 | 99.99th=[ 439]
00:27:24.095 bw ( KiB/s): min=49152, max=200192, per=9.23%, avg=104692.30, stdev=44895.72, samples=20
00:27:24.095 iops : min= 192, max= 782, avg=408.90, stdev=175.43, samples=20
00:27:24.095 lat (msec) : 4=0.14%, 10=1.59%, 20=3.66%, 50=11.84%, 100=17.91%
00:27:24.095 lat (msec) : 250=46.05%, 500=18.80%
00:27:24.095 cpu : usr=1.27%, sys=1.32%, ctx=2388, majf=0, minf=1
00:27:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:27:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.095 issued rwts: total=0,4154,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.095 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.095 job8: (groupid=0, jobs=1): err= 0: pid=1146777: Mon Jul 15 07:54:14 2024
00:27:24.095 write: IOPS=377, BW=94.3MiB/s (98.9MB/s)(958MiB/10159msec); 0 zone resets
00:27:24.095 slat (usec): min=17, max=111257, avg=1870.06, stdev=5285.23
00:27:24.095 clat (usec): min=1294, max=430513, avg=167724.46, stdev=92076.95
00:27:24.095 lat (usec): min=1318, max=436230, avg=169594.51, stdev=93266.87
00:27:24.095 clat percentiles (msec):
00:27:24.095 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 42], 20.00th=[ 85],
00:27:24.095 | 30.00th=[ 114], 40.00th=[ 131], 50.00th=[ 165], 60.00th=[ 194],
00:27:24.095 | 70.00th=[ 224], 80.00th=[ 249], 90.00th=[ 284], 95.00th=[ 330],
00:27:24.095 | 99.00th=[ 388], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 430],
00:27:24.095 | 99.99th=[ 430]
00:27:24.095 bw ( KiB/s): min=57344, max=161792, per=8.50%, avg=96439.35, stdev=31874.52, samples=20
00:27:24.095 iops : min= 224, max= 632, avg=376.65, stdev=124.47, samples=20
00:27:24.095 lat (msec) : 2=0.10%, 4=0.29%, 10=2.06%, 20=1.98%, 50=7.54%
00:27:24.095 lat (msec) : 100=11.09%, 250=57.45%, 500=19.47%
00:27:24.095 cpu : usr=1.20%, sys=1.25%, ctx=1991, majf=0, minf=1
00:27:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:27:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.095 issued rwts: total=0,3831,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.095 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.095 job9: (groupid=0, jobs=1): err= 0: pid=1146778: Mon Jul 15 07:54:14 2024
00:27:24.095 write: IOPS=571, BW=143MiB/s (150MB/s)(1452MiB/10161msec); 0 zone resets
00:27:24.095 slat (usec): min=16, max=207955, avg=1323.73, stdev=5305.57
00:27:24.095 clat (usec): min=1352, max=464613, avg=110571.80, stdev=83797.14
00:27:24.095 lat (usec): min=1392, max=464656, avg=111895.53, stdev=84799.23
00:27:24.095 clat percentiles (msec):
00:27:24.095 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 51],
00:27:24.095 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 83], 60.00th=[ 104],
00:27:24.095 | 70.00th=[ 134], 80.00th=[ 178], 90.00th=[ 247], 95.00th=[ 279],
00:27:24.095 | 99.00th=[ 334], 99.50th=[ 414], 99.90th=[ 443], 99.95th=[ 451],
00:27:24.095 | 99.99th=[ 464]
00:27:24.095 bw ( KiB/s): min=64641, max=307608, per=12.96%, avg=147038.70, stdev=77649.81, samples=20
00:27:24.095 iops : min= 252, max= 1201, avg=574.30, stdev=303.29, samples=20
00:27:24.095 lat (msec) : 2=0.07%, 4=0.24%, 10=2.48%, 20=5.29%, 50=11.78%
00:27:24.095 lat (msec) : 100=39.02%, 250=31.63%, 500=9.50%
00:27:24.095 cpu : usr=1.59%, sys=1.85%, ctx=2872, majf=0, minf=1
00:27:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:27:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.095 issued rwts: total=0,5808,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.095 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.095 job10: (groupid=0, jobs=1): err= 0: pid=1146779: Mon Jul 15 07:54:14 2024
00:27:24.095 write: IOPS=414, BW=104MiB/s (109MB/s)(1051MiB/10144msec); 0 zone resets
00:27:24.095 slat (usec): min=15, max=70689, avg=1474.73, stdev=4745.32
00:27:24.095 clat (usec): min=1368, max=373464, avg=152940.68, stdev=100719.10
00:27:24.095 lat (usec): min=1396, max=373507, avg=154415.40, stdev=102008.54
00:27:24.095 clat percentiles (msec):
00:27:24.095 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 53],
00:27:24.095 | 30.00th=[ 70], 40.00th=[ 104], 50.00th=[ 133], 60.00th=[ 203],
00:27:24.095 | 70.00th=[ 232], 80.00th=[ 257], 90.00th=[ 292], 95.00th=[ 309],
00:27:24.095 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 372], 99.95th=[ 372],
00:27:24.095 | 99.99th=[ 376]
00:27:24.095 bw ( KiB/s): min=51200, max=229376, per=9.34%, avg=105940.45, stdev=52402.84, samples=20
00:27:24.095 iops : min= 200, max= 896, avg=413.80, stdev=204.73, samples=20
00:27:24.095 lat (msec) : 2=0.24%, 4=0.76%, 10=2.69%, 20=4.93%, 50=10.19%
00:27:24.095 lat (msec) : 100=19.68%, 250=39.34%, 500=22.18%
00:27:24.095 cpu : usr=1.29%, sys=1.39%, ctx=2683, majf=0, minf=1
00:27:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:27:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:24.095 issued rwts: total=0,4202,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:24.095 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:24.095
00:27:24.095 Run status group 0 (all jobs):
00:27:24.095 WRITE: bw=1108MiB/s (1162MB/s), 79.4MiB/s-143MiB/s (83.3MB/s-150MB/s), io=11.0GiB (11.8GB), run=10065-10170msec
00:27:24.095
00:27:24.096 Disk stats (read/write):
00:27:24.096 nvme0n1: ios=48/7041, merge=0/0, ticks=4254/1190627, in_queue=1194881, util=99.69%
00:27:24.096 nvme10n1: ios=51/7662, merge=0/0, ticks=660/1218486, in_queue=1219146, util=99.91%
00:27:24.096 nvme1n1: ios=45/6445, merge=0/0, ticks=3748/1240364, in_queue=1244112, util=99.86%
00:27:24.096 nvme2n1: ios=15/6528, merge=0/0, ticks=105/1244921, in_queue=1245026, util=97.84%
00:27:24.096 nvme3n1: ios=50/10646, merge=0/0, ticks=5105/1177452, in_queue=1182557, util=99.91%
00:27:24.096 nvme4n1: ios=0/6893, merge=0/0, ticks=0/1219344, in_queue=1219344, util=98.04%
00:27:24.096 nvme5n1: ios=43/7939, merge=0/0, ticks=1718/1244206, in_queue=1245924, util=99.93%
00:27:24.096 nvme6n1: ios=0/8120, merge=0/0, ticks=0/1216249, in_queue=1216249, util=98.33%
00:27:24.096 nvme7n1: ios=40/7489, merge=0/0, ticks=1831/1211538, in_queue=1213369, util=99.93%
00:27:24.096 nvme8n1: ios=46/11614, merge=0/0, ticks=1461/1208544, in_queue=1210005, util=99.95%
00:27:24.096 nvme9n1: ios=0/8212, merge=0/0, ticks=0/1214994, in_queue=1214994, util=99.09%
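[Editorial sketch] A quick consistency check on the summaries above: fio's average bandwidth is simply average IOPS times the 256 KiB block size, e.g. for job0, 358.45 iops x 256 KiB = 91763 KiB/s, which matches the reported avg=91769.50 to within rounding across samples. A one-line bash helper to verify any job line (the helper name bw_check is hypothetical, not part of the test suite):

  # bw_check IOPS BS_KIB -> expected average bandwidth in KiB/s
  bw_check() { echo "$1 * $2" | bc; }
  bw_check 358.45 256   # prints 91763.20; fio reported avg=91769.50 for job0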
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:24.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:24.096 07:54:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:27:24.096 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:24.096 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:27:24.355 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:24.355 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:27:24.613 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:27:24.613 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:27:24.613 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:24.613 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:24.613 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4
00:27:24.613 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:24.613 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4
00:27:24.871 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:24.871 07:54:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:27:24.871 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.871 07:54:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:27:24.871 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:24.871 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:25.130 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:27:25.390 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:25.390 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:27:25.677 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:25.677 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:27:25.936 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.936 07:54:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:25.936 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.936 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:25.936 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:27:26.194 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:26.194 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:27:26.453 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:27:26.453 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
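[Editorial sketch] All eleven disconnect/delete rounds traced above follow one shape; reconstructed from the xtrace markers (multiconnection.sh@37-40 and autotest_common.sh@1219-1231), not copied verbatim from the SPDK source:

  for i in $(seq 1 "$NVMF_SUBSYS"); do                             # @37: NVMF_SUBSYS=11 here
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"             # @38: tear down the host side
    waitforserial_disconnect "SPDK${i}"                            # @39: block until the serial is gone
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # @40: delete the target subsystem
  done

The waitforserial_disconnect helper evidently polls `lsblk -o NAME,SERIAL | grep -q -w SPDKn` and returns 0 once no block device carries that serial, which is why each round shows the lsblk/grep pair before the rpc_cmd.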
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:26.453 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:26.453 rmmod nvme_tcp
00:27:26.712 rmmod nvme_fabrics
00:27:26.712 rmmod nvme_keyring
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1141175 ']'
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1141175
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1141175 ']'
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1141175
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1141175
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1141175'
00:27:26.712 killing process with pid 1141175
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1141175
00:27:26.712 07:54:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1141175
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:30.003 07:54:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:31.904 07:54:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:31.904
00:27:31.904 real 1m5.271s
00:27:31.904 user 3m39.710s
00:27:31.904 sys 0m22.857s
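[Editorial sketch] The teardown above (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini) unloads the kernel initiator stack and kills the target; a condensed bash sketch inferred from the trace, with the retry back-off between modprobe attempts assumed rather than read from the source:

  sync
  set +e                               # common.sh@120: tolerate busy modules
  for i in {1..20}; do                 # @121: retry loop visible in the trace
    modprobe -v -r nvme-tcp && break   # @122: also drops nvme_fabrics/nvme_keyring deps
    sleep 1                            # assumption: short pause before the next attempt
  done
  modprobe -v -r nvme-fabrics          # @123
  set -e
  kill 1141175 && wait 1141175         # killprocess: SIGTERM the nvmf target, then reap it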
00:27:31.904 07:54:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:31.904 07:54:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:31.904 ************************************
00:27:31.904 END TEST nvmf_multiconnection
00:27:31.904 ************************************
00:27:31.904 07:54:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:31.904 07:54:22 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:27:31.904 07:54:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:31.904 07:54:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:31.904 07:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:31.904 ************************************
00:27:31.904 START TEST nvmf_initiator_timeout
00:27:31.904 ************************************
00:27:31.904 07:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:27:31.904 * Looking for test storage...
00:27:31.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:31.904 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
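[Editorial sketch] Note how NVME_HOSTID above is just the UUID tail of the freshly generated hostnqn. A two-line bash sketch of that derivation; the parameter-expansion form is assumed, since the log only shows the resulting values:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID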
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:31.905 07:54:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=()
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:33.805 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:33.805 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372
-- # [[ tcp == rdma ]] 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.805 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.806 07:54:24 
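gather_supported_nvmf_pci_devs keys its e810/x722/mlx arrays on PCI vendor:device IDs; 0x8086:0x159b is the E810 part both ports above report, and the netdev name comes from the device's sysfs net/ directory. A rough standalone sketch of the same discovery, assuming lspci is available:

# Sketch only: list E810 ports the way the trace above discovers them.
intel=8086
for id in 1592 159b; do                    # E810 device IDs used by nvmf/common.sh
  lspci -Dd "${intel}:${id}" | while read -r addr _; do
    echo "Found ${addr} (0x${intel} - 0x${id})"
    ls "/sys/bus/pci/devices/${addr}/net" 2>/dev/null   # netdev name, e.g. cvl_0_0
  done
done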
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.806 07:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.806 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.806 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.806 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.806 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:27:34.064 00:27:34.064 --- 10.0.0.2 ping statistics --- 00:27:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.064 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:34.064 00:27:34.064 --- 10.0.0.1 ping statistics --- 00:27:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.064 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1151023 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1151023 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1151023 ']' 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.064 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.065 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.065 07:54:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.065 [2024-07-15 07:54:25.195482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
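The trace above builds and verifies the test topology: nvmf_tcp_init moves one E810 port (cvl_0_0, 10.0.0.2) into the namespace cvl_0_0_ns_spdk, leaves its link peer (cvl_0_1, 10.0.0.1) on the host, confirms both directions with ping, then launches nvmf_tgt inside the namespace (its EAL startup banner continues below). Condensed from the commands traced above, with the binary path shortened:

# Condensed from the trace (interfaces/IPs as in this run).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # host -> target namespace
# Launch the target inside the namespace (-m 0xF = cores 0-3):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF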
00:27:34.065 [2024-07-15 07:54:25.195627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.065 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.322 [2024-07-15 07:54:25.328324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.579 [2024-07-15 07:54:25.581043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.579 [2024-07-15 07:54:25.581113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.579 [2024-07-15 07:54:25.581156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.579 [2024-07-15 07:54:25.581177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.579 [2024-07-15 07:54:25.581208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.579 [2024-07-15 07:54:25.581323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.579 [2024-07-15 07:54:25.581393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.579 [2024-07-15 07:54:25.581473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.579 [2024-07-15 07:54:25.581482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 Malloc0 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 Delay0 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.145 07:54:26 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 [2024-07-15 07:54:26.225016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.145 [2024-07-15 07:54:26.254381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.145 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:35.715 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:35.715 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:35.715 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:35.715 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:35.715 07:54:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:38.249 07:54:28 
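With the target listening on /var/tmp/spdk.sock, the test stacks up its data path over the RPCs traced above: a 64 MiB malloc bdev, a delay bdev wrapping it (average/p99 read/write latencies in microseconds, per SPDK's delay bdev interface), the TCP transport, and a subsystem exposing Delay0 on 10.0.0.2:4420, which the kernel initiator then connects to. The same sequence expressed with scripts/rpc.py (equivalent to the rpc_cmd wrapper used here):

# Equivalent rpc.py calls for the rpc_cmd sequence traced above.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Connect from the host side (--hostnqn/--hostid flags omitted for brevity):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420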
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1151459 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:38.249 07:54:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:38.249 [global] 00:27:38.249 thread=1 00:27:38.249 invalidate=1 00:27:38.249 rw=write 00:27:38.249 time_based=1 00:27:38.249 runtime=60 00:27:38.249 ioengine=libaio 00:27:38.249 direct=1 00:27:38.249 bs=4096 00:27:38.249 iodepth=1 00:27:38.249 norandommap=0 00:27:38.249 numjobs=1 00:27:38.249 00:27:38.249 verify_dump=1 00:27:38.249 verify_backlog=512 00:27:38.249 verify_state_save=0 00:27:38.249 do_verify=1 00:27:38.249 verify=crc32c-intel 00:27:38.249 [job0] 00:27:38.249 filename=/dev/nvme0n1 00:27:38.249 Could not set queue depth (nvme0n1) 00:27:38.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:38.249 fio-3.35 00:27:38.249 Starting 1 thread 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.788 true 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.788 true 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.788 true 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.788 true 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.788 07:54:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:44.077 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:44.078 true 00:27:44.078 07:54:34 
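The test's core mechanic is visible here and continues below: while fio writes to /dev/nvme0n1, bdev_delay_update_latency raises Delay0's latencies from 30 µs to 31,000,000 µs (31 s), past the Linux initiator's default 30 s I/O timeout (nvme_io_timeout), provoking command timeouts; after a pause the values are restored to 30 µs so fio can complete. (The p99_write value of 310000000 is as issued by the script.) A sketch of the toggle, assuming the same RPC socket:

# Sketch: push latency past the initiator's 30 s I/O timeout, then restore.
for m in avg_read avg_write p99_read p99_write; do
  scripts/rpc.py bdev_delay_update_latency Delay0 "$m" 31000000   # 31 s in usec
done
sleep 3
for m in avg_read avg_write p99_read p99_write; do
  scripts/rpc.py bdev_delay_update_latency Delay0 "$m" 30         # back to 30 usec
done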
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:44.078 true 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:44.078 true 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:44.078 true 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:44.078 07:54:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1151459 00:28:40.362 00:28:40.362 job0: (groupid=0, jobs=1): err= 0: pid=1151531: Mon Jul 15 07:55:29 2024 00:28:40.362 read: IOPS=48, BW=194KiB/s (198kB/s)(11.3MiB/60013msec) 00:28:40.362 slat (usec): min=5, max=9618, avg=22.42, stdev=225.94 00:28:40.362 clat (usec): min=373, max=40948k, avg=20235.88, stdev=759880.04 00:28:40.362 lat (usec): min=380, max=40948k, avg=20258.31, stdev=759880.32 00:28:40.362 clat percentiles (usec): 00:28:40.362 | 1.00th=[ 404], 5.00th=[ 420], 10.00th=[ 429], 00:28:40.362 | 20.00th=[ 449], 30.00th=[ 474], 40.00th=[ 490], 00:28:40.362 | 50.00th=[ 498], 60.00th=[ 510], 70.00th=[ 523], 00:28:40.362 | 80.00th=[ 545], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:40.362 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:28:40.362 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:40.362 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60013msec); 0 zone resets 00:28:40.362 slat (usec): min=7, max=27699, avg=27.90, stdev=499.58 00:28:40.362 clat (usec): min=237, max=546, avg=346.74, stdev=59.80 00:28:40.362 lat (usec): min=245, max=28026, avg=374.64, stdev=503.57 00:28:40.362 clat percentiles (usec): 00:28:40.362 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 289], 00:28:40.362 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 343], 60.00th=[ 367], 00:28:40.362 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 445], 00:28:40.362 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 519], 99.95th=[ 537], 00:28:40.362 | 99.99th=[ 545] 00:28:40.362 bw ( KiB/s): min= 2216, max= 5040, per=100.00%, avg=4096.00, stdev=991.70, samples=6 00:28:40.362 iops : min= 554, max= 1260, avg=1024.00, stdev=247.93, samples=6 00:28:40.362 lat (usec) : 250=0.75%, 500=75.08%, 750=17.35%, 1000=0.02% 00:28:40.362 lat (msec) : 50=6.78%, >=2000=0.02% 00:28:40.362 cpu : usr=0.13%, sys=0.24%, ctx=5981, majf=0, 
minf=2 00:28:40.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:40.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.362 issued rwts: total=2904,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:40.362 00:28:40.362 Run status group 0 (all jobs): 00:28:40.362 READ: bw=194KiB/s (198kB/s), 194KiB/s-194KiB/s (198kB/s-198kB/s), io=11.3MiB (11.9MB), run=60013-60013msec 00:28:40.362 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60013-60013msec 00:28:40.362 00:28:40.362 Disk stats (read/write): 00:28:40.362 nvme0n1: ios=2953/3072, merge=0/0, ticks=19000/1019, in_queue=20019, util=99.96% 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:40.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:40.362 nvmf hotplug test: fio successful as expected 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.362 rmmod nvme_tcp 00:28:40.362 rmmod nvme_fabrics 00:28:40.362 rmmod nvme_keyring 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1151023 ']' 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1151023 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1151023 ']' 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1151023 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1151023 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151023' 00:28:40.362 killing process with pid 1151023 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1151023 00:28:40.362 07:55:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1151023 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.362 07:55:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.741 07:55:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:41.741 00:28:41.741 real 1m9.919s 00:28:41.741 user 4m14.806s 00:28:41.741 sys 0m6.915s 00:28:41.741 07:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:41.741 07:55:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:41.741 ************************************ 00:28:41.741 END TEST nvmf_initiator_timeout 00:28:41.741 ************************************ 00:28:41.741 07:55:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:41.741 07:55:32 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:41.741 07:55:32 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:41.741 07:55:32 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:41.741 
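Teardown mirrors the setup: disconnect the initiator, unload the nvme-tcp module stack (the rmmod lines above are its output), kill the target by the PID captured at startup, and let nvmf_tcp_fini undo the namespace plumbing. A condensed sketch of the same cleanup:

# Condensed teardown matching the trace above.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"            # $nvmfpid as captured at startup
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: fini removes the netns
ip -4 addr flush cvl_0_1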
07:55:32 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:41.741 07:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:43.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:43.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:43.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:43.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:43.669 07:55:34 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:43.669 07:55:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:43.669 07:55:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.669 07:55:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.669 ************************************ 00:28:43.669 START TEST nvmf_perf_adq 00:28:43.669 ************************************ 00:28:43.669 07:55:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:43.927 * Looking for test storage... 
00:28:43.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.927 07:55:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.826 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.826 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.826 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.826 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:45.826 07:55:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:46.394 07:55:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:48.930 07:55:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:54.202 07:55:44 
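The perf_adq test exercises Intel ADQ on the same E810 ports: adq_reload_driver recycles the ice driver (rmmod/modprobe plus a settle delay, as traced above) before the target comes up, and the data path is driven by spdk_nvme_perf over NVMe/TCP. A sketch of a comparable perf invocation against the target address used in this run; queue depth, block size, workload and runtime are illustrative values, not the script's:

# Illustrative spdk_nvme_perf run over NVMe/TCP (flag values are examples).
rmmod ice && modprobe ice && sleep 5          # reload as adq_reload_driver does
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 30 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'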
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:54.202 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.203 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.203 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.203 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.203 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.203 07:55:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:54.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:28:54.203 00:28:54.203 --- 10.0.0.2 ping statistics --- 00:28:54.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.203 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:28:54.203 00:28:54.203 --- 10.0.0.1 ping statistics --- 00:28:54.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.203 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1163167 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1163167 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1163167 ']' 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:54.203 07:55:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.203 [2024-07-15 07:55:44.817492] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
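
What the nvmf_tcp_init trace above amounts to: one physical host plays both NVMe/TCP target and initiator by pushing one E810 port into a private network namespace. The commands below are condensed from the ones actually traced (run as root; cvl_0_0/cvl_0_1 are this rig's names for the two E810 ports):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                    # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator check

The sub-millisecond round trips in the two ping blocks are what lets nvmftestinit return 0; nvmf_tgt is then started inside the namespace through the NVMF_TARGET_NS_CMD prefix (ip netns exec cvl_0_0_ns_spdk), which is why the target PID and its sockets live apart from the perf initiator.
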
00:28:54.203 [2024-07-15 07:55:44.817638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.203 [2024-07-15 07:55:44.955164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.203 [2024-07-15 07:55:45.221508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.203 [2024-07-15 07:55:45.221581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.203 [2024-07-15 07:55:45.221609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.203 [2024-07-15 07:55:45.221631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.203 [2024-07-15 07:55:45.221652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.203 [2024-07-15 07:55:45.221788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.203 [2024-07-15 07:55:45.221861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.203 [2024-07-15 07:55:45.221926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.203 [2024-07-15 07:55:45.221936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.769 07:55:45 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.029 [2024-07-15 07:55:46.148963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.029 Malloc1 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.029 [2024-07-15 07:55:46.253427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.029 07:55:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.287 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1163436 00:28:55.287 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:55.287 07:55:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:55.287 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:57.227 
"tick_rate": 2700000000, 00:28:57.227 "poll_groups": [ 00:28:57.227 { 00:28:57.227 "name": "nvmf_tgt_poll_group_000", 00:28:57.227 "admin_qpairs": 1, 00:28:57.227 "io_qpairs": 1, 00:28:57.227 "current_admin_qpairs": 1, 00:28:57.227 "current_io_qpairs": 1, 00:28:57.227 "pending_bdev_io": 0, 00:28:57.227 "completed_nvme_io": 17272, 00:28:57.227 "transports": [ 00:28:57.227 { 00:28:57.227 "trtype": "TCP" 00:28:57.227 } 00:28:57.227 ] 00:28:57.227 }, 00:28:57.227 { 00:28:57.227 "name": "nvmf_tgt_poll_group_001", 00:28:57.227 "admin_qpairs": 0, 00:28:57.227 "io_qpairs": 1, 00:28:57.227 "current_admin_qpairs": 0, 00:28:57.227 "current_io_qpairs": 1, 00:28:57.227 "pending_bdev_io": 0, 00:28:57.227 "completed_nvme_io": 17298, 00:28:57.227 "transports": [ 00:28:57.227 { 00:28:57.227 "trtype": "TCP" 00:28:57.227 } 00:28:57.227 ] 00:28:57.227 }, 00:28:57.227 { 00:28:57.227 "name": "nvmf_tgt_poll_group_002", 00:28:57.227 "admin_qpairs": 0, 00:28:57.227 "io_qpairs": 1, 00:28:57.227 "current_admin_qpairs": 0, 00:28:57.227 "current_io_qpairs": 1, 00:28:57.227 "pending_bdev_io": 0, 00:28:57.227 "completed_nvme_io": 17217, 00:28:57.227 "transports": [ 00:28:57.227 { 00:28:57.227 "trtype": "TCP" 00:28:57.227 } 00:28:57.227 ] 00:28:57.227 }, 00:28:57.227 { 00:28:57.227 "name": "nvmf_tgt_poll_group_003", 00:28:57.227 "admin_qpairs": 0, 00:28:57.227 "io_qpairs": 1, 00:28:57.227 "current_admin_qpairs": 0, 00:28:57.227 "current_io_qpairs": 1, 00:28:57.227 "pending_bdev_io": 0, 00:28:57.227 "completed_nvme_io": 16786, 00:28:57.227 "transports": [ 00:28:57.227 { 00:28:57.227 "trtype": "TCP" 00:28:57.227 } 00:28:57.227 ] 00:28:57.227 } 00:28:57.227 ] 00:28:57.227 }' 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:57.227 07:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1163436 00:29:05.339 Initializing NVMe Controllers 00:29:05.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:05.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:05.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:05.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:05.339 Initialization complete. Launching workers. 
00:29:05.339 ======================================================== 00:29:05.339 Latency(us) 00:29:05.339 Device Information : IOPS MiB/s Average min max 00:29:05.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9425.24 36.82 6792.10 3424.11 10672.23 00:29:05.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9446.23 36.90 6776.29 6118.41 9319.85 00:29:05.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9373.24 36.61 6827.06 3458.71 9401.47 00:29:05.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9070.24 35.43 7056.25 3504.44 10488.81 00:29:05.340 ======================================================== 00:29:05.340 Total : 37314.95 145.76 6861.09 3424.11 10672.23 00:29:05.340 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:05.340 rmmod nvme_tcp 00:29:05.340 rmmod nvme_fabrics 00:29:05.340 rmmod nvme_keyring 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1163167 ']' 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1163167 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1163167 ']' 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1163167 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.340 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163167 00:29:05.597 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:05.598 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:05.598 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163167' 00:29:05.598 killing process with pid 1163167 00:29:05.598 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1163167 00:29:05.598 07:55:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1163167 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.978 07:55:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.885 07:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.885 07:56:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:08.885 07:56:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:09.450 07:56:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:11.981 07:56:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:17.259 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:17.259 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:17.259 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.260 07:56:07 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:17.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:17.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
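
Before the second device enumeration above continues, it is worth restating the pass/fail logic that closed the first run (the count=4 gate at perf_adq.sh@78-79) and will close this busy-poll run (the count=2 gate at @100-101): both interrogate the target's poll groups over RPC and count how many carry I/O connections. A sketch of the same checks, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock are reachable (both paths are assumptions of this sketch, not taken from the log):

    # Restated from the perf_adq.sh traces: decide pass/fail from nvmf_get_stats.
    stats=$(scripts/rpc.py nvmf_get_stats)
    # Pass 1 (no ADQ): each of the 4 poll groups should carry exactly one of
    # the initiator's 4 I/O qpairs.
    busy=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' <<<"$stats" | wc -l)
    [[ $busy -ne 4 ]] && echo "round-robin placement broken: $busy/4 groups active"
    # Pass 2 (ADQ + placement id): connections collapse onto the ADQ queue set,
    # so at least 2 poll groups must sit completely idle.
    idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<<"$stats" | wc -l)
    [[ $idle -lt 2 ]] && echo "ADQ steering ineffective: only $idle idle groups"

The 'select(...) | length' trick prints one line per matching poll group, so wc -l yields the group count; that is the same mechanism the harness uses above and below.
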
00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:17.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:17.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.260 
07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:29:17.260 00:29:17.260 --- 10.0.0.2 ping statistics --- 00:29:17.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.260 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:17.260 00:29:17.260 --- 10.0.0.1 ping statistics --- 00:29:17.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.260 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:17.260 net.core.busy_poll = 1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:17.260 net.core.busy_read = 1 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:17.260 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:29:17.261 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:17.261 07:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1166188 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1166188 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1166188 ']' 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.261 07:56:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.261 [2024-07-15 07:56:08.115035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:17.261 [2024-07-15 07:56:08.115176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.261 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.261 [2024-07-15 07:56:08.247294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.519 [2024-07-15 07:56:08.503769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.519 [2024-07-15 07:56:08.503841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.519 [2024-07-15 07:56:08.503868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.519 [2024-07-15 07:56:08.503899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.519 [2024-07-15 07:56:08.503922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
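
The adq_configure_driver sequence traced above is the hardware half of ADQ, gathered here in one place for readability (in the trace every command runs inside the target namespace via 'ip netns exec cvl_0_0_ns_spdk ...'):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded to
    # the NIC in channel mode.
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Hardware-only (skip_sw) flower rule steering NVMe/TCP traffic for
    # 10.0.0.2:4420 into TC1.
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching software half appears a few lines below: sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1, which together let SPDK pin each accepted connection to the poll group that owns its hardware queue instead of round-robining across all four cores.
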
00:29:17.519 [2024-07-15 07:56:08.504022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.519 [2024-07-15 07:56:08.504093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.519 [2024-07-15 07:56:08.504134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.519 [2024-07-15 07:56:08.504144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.138 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.396 [2024-07-15 07:56:09.459041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.396 Malloc1 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.396 07:56:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:18.396 [2024-07-15 07:56:09.562329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1166348 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:18.396 07:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:18.655 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:20.553 "tick_rate": 2700000000, 00:29:20.553 "poll_groups": [ 00:29:20.553 { 00:29:20.553 "name": "nvmf_tgt_poll_group_000", 00:29:20.553 "admin_qpairs": 1, 00:29:20.553 "io_qpairs": 1, 00:29:20.553 "current_admin_qpairs": 1, 00:29:20.553 "current_io_qpairs": 1, 00:29:20.553 "pending_bdev_io": 0, 00:29:20.553 "completed_nvme_io": 18674, 00:29:20.553 "transports": [ 00:29:20.553 { 00:29:20.553 "trtype": "TCP" 00:29:20.553 } 00:29:20.553 ] 00:29:20.553 }, 00:29:20.553 { 00:29:20.553 "name": "nvmf_tgt_poll_group_001", 00:29:20.553 "admin_qpairs": 0, 00:29:20.553 "io_qpairs": 3, 00:29:20.553 "current_admin_qpairs": 0, 00:29:20.553 "current_io_qpairs": 3, 00:29:20.553 "pending_bdev_io": 0, 00:29:20.553 "completed_nvme_io": 19262, 00:29:20.553 "transports": [ 00:29:20.553 { 00:29:20.553 "trtype": "TCP" 00:29:20.553 } 00:29:20.553 ] 00:29:20.553 }, 00:29:20.553 { 00:29:20.553 "name": "nvmf_tgt_poll_group_002", 00:29:20.553 "admin_qpairs": 0, 00:29:20.553 "io_qpairs": 0, 00:29:20.553 "current_admin_qpairs": 0, 00:29:20.553 "current_io_qpairs": 0, 00:29:20.553 "pending_bdev_io": 0, 00:29:20.553 "completed_nvme_io": 0, 
00:29:20.553 "transports": [ 00:29:20.553 { 00:29:20.553 "trtype": "TCP" 00:29:20.553 } 00:29:20.553 ] 00:29:20.553 }, 00:29:20.553 { 00:29:20.553 "name": "nvmf_tgt_poll_group_003", 00:29:20.553 "admin_qpairs": 0, 00:29:20.553 "io_qpairs": 0, 00:29:20.553 "current_admin_qpairs": 0, 00:29:20.553 "current_io_qpairs": 0, 00:29:20.553 "pending_bdev_io": 0, 00:29:20.553 "completed_nvme_io": 0, 00:29:20.553 "transports": [ 00:29:20.553 { 00:29:20.553 "trtype": "TCP" 00:29:20.553 } 00:29:20.553 ] 00:29:20.553 } 00:29:20.553 ] 00:29:20.553 }' 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:20.553 07:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1166348 00:29:28.721 Initializing NVMe Controllers 00:29:28.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:28.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:28.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:28.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:28.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:28.721 Initialization complete. Launching workers. 00:29:28.721 ======================================================== 00:29:28.721 Latency(us) 00:29:28.721 Device Information : IOPS MiB/s Average min max 00:29:28.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3440.10 13.44 18665.34 2663.80 69720.23 00:29:28.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3559.10 13.90 17984.65 2526.38 67789.33 00:29:28.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10276.70 40.14 6228.71 2080.93 9133.60 00:29:28.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3470.70 13.56 18445.02 2524.94 68748.68 00:29:28.721 ======================================================== 00:29:28.721 Total : 20746.60 81.04 12351.30 2080.93 69720.23 00:29:28.721 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.721 rmmod nvme_tcp 00:29:28.721 rmmod nvme_fabrics 00:29:28.721 rmmod nvme_keyring 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1166188 ']' 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1166188 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1166188 ']' 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1166188 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166188 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166188' 00:29:28.721 killing process with pid 1166188 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1166188 00:29:28.721 07:56:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1166188 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.623 07:56:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.528 07:56:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.528 07:56:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:32.528 00:29:32.528 real 0m48.502s 00:29:32.528 user 2m50.963s 00:29:32.528 sys 0m10.673s 00:29:32.528 07:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.528 07:56:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:32.528 ************************************ 00:29:32.528 END TEST nvmf_perf_adq 00:29:32.528 ************************************ 00:29:32.528 07:56:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:32.528 07:56:23 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:32.528 07:56:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:32.528 07:56:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.528 07:56:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.528 ************************************ 00:29:32.528 START TEST nvmf_shutdown 00:29:32.528 ************************************ 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:32.528 * Looking for test storage... 
00:29:32.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:32.528 ************************************ 00:29:32.528 START TEST nvmf_shutdown_tc1 00:29:32.528 ************************************ 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:29:32.528 07:56:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:32.528 07:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.431 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.432 07:56:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.432 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.432 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:34.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:29:34.432 00:29:34.432 --- 10.0.0.2 ping statistics --- 00:29:34.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.432 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:34.432 00:29:34.432 --- 10.0.0.1 ping statistics --- 00:29:34.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.432 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1169635 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1169635 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1169635 ']' 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.432 07:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 [2024-07-15 07:56:25.746438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:34.691 [2024-07-15 07:56:25.746577] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.691 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.691 [2024-07-15 07:56:25.883398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.951 [2024-07-15 07:56:26.140483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.951 [2024-07-15 07:56:26.140561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.951 [2024-07-15 07:56:26.140589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.951 [2024-07-15 07:56:26.140610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.951 [2024-07-15 07:56:26.140631] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.951 [2024-07-15 07:56:26.140753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.951 [2024-07-15 07:56:26.140979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.951 [2024-07-15 07:56:26.141021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.951 [2024-07-15 07:56:26.141031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.518 [2024-07-15 07:56:26.688066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.518 07:56:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.518 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.519 07:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.777 Malloc1 00:29:35.777 [2024-07-15 07:56:26.814906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.777 Malloc2 00:29:35.777 Malloc3 00:29:36.035 Malloc4 00:29:36.035 Malloc5 00:29:36.294 Malloc6 00:29:36.294 Malloc7 00:29:36.294 Malloc8 00:29:36.553 Malloc9 00:29:36.553 Malloc10 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1169942 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1169942 
/var/tmp/bdevperf.sock 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1169942 ']' 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.553 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.553 { 00:29:36.553 "params": { 00:29:36.553 "name": "Nvme$subsystem", 00:29:36.553 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 
"name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 
00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.554 { 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme$subsystem", 00:29:36.554 "trtype": "$TEST_TRANSPORT", 00:29:36.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "$NVMF_PORT", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.554 "hdgst": ${hdgst:-false}, 00:29:36.554 "ddgst": ${ddgst:-false} 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 } 00:29:36.554 EOF 00:29:36.554 )") 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:36.554 07:56:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme1", 00:29:36.554 "trtype": "tcp", 00:29:36.554 "traddr": "10.0.0.2", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "4420", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:36.554 "hdgst": false, 00:29:36.554 "ddgst": false 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 },{ 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme2", 00:29:36.554 "trtype": "tcp", 00:29:36.554 "traddr": "10.0.0.2", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "4420", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:36.554 "hdgst": false, 00:29:36.554 "ddgst": false 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 },{ 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme3", 00:29:36.554 "trtype": "tcp", 00:29:36.554 "traddr": "10.0.0.2", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "4420", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:36.554 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:36.554 "hdgst": false, 00:29:36.554 "ddgst": false 00:29:36.554 }, 00:29:36.554 "method": "bdev_nvme_attach_controller" 00:29:36.554 },{ 00:29:36.554 "params": { 00:29:36.554 "name": "Nvme4", 00:29:36.554 "trtype": "tcp", 00:29:36.554 "traddr": "10.0.0.2", 00:29:36.554 "adrfam": "ipv4", 00:29:36.554 "trsvcid": "4420", 00:29:36.554 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:36.555 "hdgst": false, 00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 },{ 00:29:36.555 "params": { 00:29:36.555 "name": "Nvme5", 00:29:36.555 "trtype": "tcp", 00:29:36.555 "traddr": "10.0.0.2", 00:29:36.555 "adrfam": "ipv4", 00:29:36.555 "trsvcid": "4420", 00:29:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:36.555 "hdgst": false, 00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 },{ 00:29:36.555 "params": { 00:29:36.555 "name": "Nvme6", 00:29:36.555 "trtype": "tcp", 00:29:36.555 "traddr": "10.0.0.2", 00:29:36.555 "adrfam": "ipv4", 00:29:36.555 "trsvcid": "4420", 00:29:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:36.555 "hdgst": false, 00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 },{ 00:29:36.555 "params": { 00:29:36.555 "name": "Nvme7", 00:29:36.555 "trtype": "tcp", 00:29:36.555 "traddr": "10.0.0.2", 00:29:36.555 "adrfam": "ipv4", 00:29:36.555 "trsvcid": "4420", 00:29:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:36.555 "hdgst": false, 00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 },{ 00:29:36.555 "params": { 00:29:36.555 "name": "Nvme8", 00:29:36.555 "trtype": "tcp", 00:29:36.555 "traddr": "10.0.0.2", 00:29:36.555 "adrfam": "ipv4", 00:29:36.555 "trsvcid": "4420", 00:29:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:36.555 "hdgst": false, 
00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 },{ 00:29:36.555 "params": { 00:29:36.555 "name": "Nvme9", 00:29:36.555 "trtype": "tcp", 00:29:36.555 "traddr": "10.0.0.2", 00:29:36.555 "adrfam": "ipv4", 00:29:36.555 "trsvcid": "4420", 00:29:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:36.555 "hdgst": false, 00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 },{ 00:29:36.555 "params": { 00:29:36.555 "name": "Nvme10", 00:29:36.555 "trtype": "tcp", 00:29:36.555 "traddr": "10.0.0.2", 00:29:36.555 "adrfam": "ipv4", 00:29:36.555 "trsvcid": "4420", 00:29:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:36.555 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:36.555 "hdgst": false, 00:29:36.555 "ddgst": false 00:29:36.555 }, 00:29:36.555 "method": "bdev_nvme_attach_controller" 00:29:36.555 }' 00:29:36.816 [2024-07-15 07:56:27.820677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:36.816 [2024-07-15 07:56:27.820845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:36.816 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.816 [2024-07-15 07:56:27.952176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.075 [2024-07-15 07:56:28.192212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1169942 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:39.609 07:56:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:40.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1169942 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1169635 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:40.547 07:56:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.547 { 00:29:40.547 "params": { 00:29:40.547 "name": "Nvme$subsystem", 00:29:40.547 "trtype": "$TEST_TRANSPORT", 00:29:40.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.547 "adrfam": "ipv4", 00:29:40.547 "trsvcid": "$NVMF_PORT", 00:29:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.547 "hdgst": ${hdgst:-false}, 00:29:40.547 "ddgst": ${ddgst:-false} 00:29:40.547 }, 00:29:40.547 "method": "bdev_nvme_attach_controller" 00:29:40.547 } 00:29:40.547 EOF 00:29:40.547 )") 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.547 { 00:29:40.547 "params": { 00:29:40.547 "name": "Nvme$subsystem", 00:29:40.547 "trtype": "$TEST_TRANSPORT", 00:29:40.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.547 "adrfam": "ipv4", 00:29:40.547 "trsvcid": "$NVMF_PORT", 00:29:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.547 "hdgst": ${hdgst:-false}, 00:29:40.547 "ddgst": ${ddgst:-false} 00:29:40.547 }, 00:29:40.547 "method": "bdev_nvme_attach_controller" 00:29:40.547 } 00:29:40.547 EOF 00:29:40.547 )") 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.547 { 00:29:40.547 "params": { 00:29:40.547 "name": "Nvme$subsystem", 00:29:40.547 "trtype": "$TEST_TRANSPORT", 00:29:40.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.547 "adrfam": "ipv4", 00:29:40.547 "trsvcid": "$NVMF_PORT", 00:29:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.547 "hdgst": ${hdgst:-false}, 00:29:40.547 "ddgst": ${ddgst:-false} 00:29:40.547 }, 00:29:40.547 "method": "bdev_nvme_attach_controller" 00:29:40.547 } 00:29:40.547 EOF 00:29:40.547 )") 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.547 { 00:29:40.547 "params": { 00:29:40.547 "name": "Nvme$subsystem", 00:29:40.547 "trtype": "$TEST_TRANSPORT", 00:29:40.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.547 "adrfam": "ipv4", 00:29:40.547 "trsvcid": "$NVMF_PORT", 00:29:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.547 "hdgst": ${hdgst:-false}, 00:29:40.547 "ddgst": ${ddgst:-false} 00:29:40.547 }, 00:29:40.547 "method": "bdev_nvme_attach_controller" 00:29:40.547 } 00:29:40.547 EOF 00:29:40.547 )") 00:29:40.547 07:56:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.547 { 00:29:40.547 "params": { 00:29:40.547 "name": "Nvme$subsystem", 00:29:40.547 "trtype": "$TEST_TRANSPORT", 00:29:40.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.547 "adrfam": "ipv4", 00:29:40.547 "trsvcid": "$NVMF_PORT", 00:29:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.547 "hdgst": ${hdgst:-false}, 00:29:40.547 "ddgst": ${ddgst:-false} 00:29:40.547 }, 00:29:40.547 "method": "bdev_nvme_attach_controller" 00:29:40.547 } 00:29:40.547 EOF 00:29:40.547 )") 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.547 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.547 { 00:29:40.547 "params": { 00:29:40.547 "name": "Nvme$subsystem", 00:29:40.547 "trtype": "$TEST_TRANSPORT", 00:29:40.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.547 "adrfam": "ipv4", 00:29:40.547 "trsvcid": "$NVMF_PORT", 00:29:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.548 "hdgst": ${hdgst:-false}, 00:29:40.548 "ddgst": ${ddgst:-false} 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 } 00:29:40.548 EOF 00:29:40.548 )") 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.548 { 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme$subsystem", 00:29:40.548 "trtype": "$TEST_TRANSPORT", 00:29:40.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "$NVMF_PORT", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.548 "hdgst": ${hdgst:-false}, 00:29:40.548 "ddgst": ${ddgst:-false} 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 } 00:29:40.548 EOF 00:29:40.548 )") 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.548 { 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme$subsystem", 00:29:40.548 "trtype": "$TEST_TRANSPORT", 00:29:40.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "$NVMF_PORT", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.548 "hdgst": ${hdgst:-false}, 00:29:40.548 "ddgst": ${ddgst:-false} 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 } 00:29:40.548 EOF 00:29:40.548 )") 00:29:40.548 07:56:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.548 { 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme$subsystem", 00:29:40.548 "trtype": "$TEST_TRANSPORT", 00:29:40.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "$NVMF_PORT", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.548 "hdgst": ${hdgst:-false}, 00:29:40.548 "ddgst": ${ddgst:-false} 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 } 00:29:40.548 EOF 00:29:40.548 )") 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.548 { 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme$subsystem", 00:29:40.548 "trtype": "$TEST_TRANSPORT", 00:29:40.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "$NVMF_PORT", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.548 "hdgst": ${hdgst:-false}, 00:29:40.548 "ddgst": ${ddgst:-false} 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 } 00:29:40.548 EOF 00:29:40.548 )") 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
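Annotation: the "--json /dev/fd/62" argument at the top of this bdevperf run (and /dev/fd/63 for the earlier bdev_svc, as the "line 73 ... Killed" message shows) is bash process substitution: the document assembled by the "IFS=," join below never touches disk. A minimal sketch of that wiring, reusing the hypothetical gen_target_json_sketch helper from the previous annotation; the binary path and the flags are copied from this trace.

#!/usr/bin/env bash
# Sketch: hand the generated NVMe-oF attach config to bdevperf through a
# /dev/fd path, mirroring the '--json /dev/fd/62' invocation in the log.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -q 64: queue depth; -o 65536: 64 KiB I/O size; -w verify: data-
# verification workload; -t 1: one-second run, all as traced above.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1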
00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:40.548 07:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme1", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme2", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme3", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme4", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme5", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme6", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme7", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme8", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:40.548 "hdgst": false, 
00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme9", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 },{ 00:29:40.548 "params": { 00:29:40.548 "name": "Nvme10", 00:29:40.548 "trtype": "tcp", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "adrfam": "ipv4", 00:29:40.548 "trsvcid": "4420", 00:29:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:40.548 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:40.548 "hdgst": false, 00:29:40.548 "ddgst": false 00:29:40.548 }, 00:29:40.548 "method": "bdev_nvme_attach_controller" 00:29:40.548 }' 00:29:40.548 [2024-07-15 07:56:31.582569] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:40.548 [2024-07-15 07:56:31.582715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170377 ] 00:29:40.548 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.548 [2024-07-15 07:56:31.722049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.809 [2024-07-15 07:56:31.963498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.708 Running I/O for 1 seconds... 00:29:44.085 00:29:44.085 Latency(us) 00:29:44.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.085 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme1n1 : 1.12 172.03 10.75 0.00 0.00 367655.70 24078.41 307582.29 00:29:44.085 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme2n1 : 1.13 173.58 10.85 0.00 0.00 356597.84 3349.62 309135.74 00:29:44.085 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme3n1 : 1.18 217.15 13.57 0.00 0.00 280984.46 19515.16 293601.28 00:29:44.085 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme4n1 : 1.17 219.70 13.73 0.00 0.00 273342.20 24758.04 278066.82 00:29:44.085 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme5n1 : 1.20 213.41 13.34 0.00 0.00 276758.38 21068.61 302921.96 00:29:44.085 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme6n1 : 1.17 164.01 10.25 0.00 0.00 353183.42 29127.11 338651.21 00:29:44.085 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme7n1 : 1.21 211.17 13.20 0.00 0.00 270233.41 21456.97 310689.19 00:29:44.085 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 
0x0 length 0x400 00:29:44.085 Nvme8n1 : 1.20 214.22 13.39 0.00 0.00 261018.17 23884.23 316902.97 00:29:44.085 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme9n1 : 1.22 209.61 13.10 0.00 0.00 262704.17 25243.50 321563.31 00:29:44.085 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.085 Verification LBA range: start 0x0 length 0x400 00:29:44.085 Nvme10n1 : 1.19 161.16 10.07 0.00 0.00 333688.98 27185.30 377487.36 00:29:44.085 =================================================================================================================== 00:29:44.085 Total : 1956.04 122.25 0.00 0.00 298280.47 3349.62 377487.36 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.021 rmmod nvme_tcp 00:29:45.021 rmmod nvme_fabrics 00:29:45.021 rmmod nvme_keyring 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1169635 ']' 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1169635 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1169635 ']' 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1169635 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1169635 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1169635' 00:29:45.021 killing process with pid 1169635 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1169635 00:29:45.021 07:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1169635 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.331 07:56:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:50.241 00:29:50.241 real 0m17.618s 00:29:50.241 user 0m57.020s 00:29:50.241 sys 0m3.986s 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.241 ************************************ 00:29:50.241 END TEST nvmf_shutdown_tc1 00:29:50.241 ************************************ 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.241 ************************************ 00:29:50.241 START TEST nvmf_shutdown_tc2 00:29:50.241 ************************************ 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:50.241 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.242 07:56:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.242 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.242 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.242 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:50.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:29:50.242 00:29:50.242 --- 10.0.0.2 ping statistics --- 00:29:50.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.242 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:29:50.242 00:29:50.242 --- 10.0.0.1 ping statistics --- 00:29:50.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.242 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.242 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1171652 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1171652 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1171652 ']' 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:50.243 07:56:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.503 [2024-07-15 07:56:41.487952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:50.503 [2024-07-15 07:56:41.488090] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.503 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.503 [2024-07-15 07:56:41.624606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.764 [2024-07-15 07:56:41.885119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.764 [2024-07-15 07:56:41.885197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.764 [2024-07-15 07:56:41.885225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.764 [2024-07-15 07:56:41.885247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.764 [2024-07-15 07:56:41.885277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
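Everything in tc2 runs against the network-namespace topology that nvmf/common.sh set up in the trace above (nvmf_tcp_init, @229-268): the first e810 port is moved into a private namespace for the target, the second stays in the root namespace as the initiator, and both directions are smoke-tested with a ping before nvmf_tgt is launched through the ip netns exec wrapper seen in the nvmfappstart line. Condensed, that setup amounts to the following; all interface names and addresses are verbatim from the log:

# Condensed from the nvmf_tcp_init trace above (address flushes omitted)
ip netns add cvl_0_0_ns_spdk                         # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

This is why every target-side command from here on, including the nvmf_tgt launch above, is prefixed with ip netns exec cvl_0_0_ns_spdk.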
00:29:50.764 [2024-07-15 07:56:41.885409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.764 [2024-07-15 07:56:41.885521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.764 [2024-07-15 07:56:41.885562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.764 [2024-07-15 07:56:41.885572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.331 [2024-07-15 07:56:42.460449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.331 07:56:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.589 Malloc1 00:29:51.589 [2024-07-15 07:56:42.587643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.589 Malloc2 00:29:51.589 Malloc3 00:29:51.847 Malloc4 00:29:51.847 Malloc5 00:29:51.847 Malloc6 00:29:52.128 Malloc7 00:29:52.128 Malloc8 00:29:52.386 Malloc9 00:29:52.386 Malloc10 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1171965 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1171965 /var/tmp/bdevperf.sock 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1171965 ']' 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
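The I/O generator for this test is bdevperf, launched in the trace above (target/shutdown.sh@102) with its config fed in over /dev/fd/63 via process substitution. The flags map one-to-one onto the job banner that the results below print for every bdev; all values here are taken verbatim from the traced command line:

# bdevperf invocation as traced above, annotated for reference:
#   -r /var/tmp/bdevperf.sock   RPC socket, later polled with rpc_cmd -s
#   --json /dev/fd/63           JSON config from gen_nvmf_target_json
#   -q 64                       queue depth per job ("depth: 64" in the banner)
#   -o 65536                    I/O size in bytes ("IO size: 65536")
#   -w verify                   verification workload ("workload: verify")
#   -t 10                       run time in seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10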
00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:52.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.386 { 00:29:52.386 "params": { 00:29:52.386 "name": "Nvme$subsystem", 00:29:52.386 "trtype": "$TEST_TRANSPORT", 00:29:52.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.386 "adrfam": "ipv4", 00:29:52.386 "trsvcid": "$NVMF_PORT", 00:29:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.386 "hdgst": ${hdgst:-false}, 00:29:52.386 "ddgst": ${ddgst:-false} 00:29:52.386 }, 00:29:52.386 "method": "bdev_nvme_attach_controller" 00:29:52.386 } 00:29:52.386 EOF 00:29:52.386 )") 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.386 { 00:29:52.386 "params": { 00:29:52.386 "name": "Nvme$subsystem", 00:29:52.386 "trtype": "$TEST_TRANSPORT", 00:29:52.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.386 "adrfam": "ipv4", 00:29:52.386 "trsvcid": "$NVMF_PORT", 00:29:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.386 "hdgst": ${hdgst:-false}, 00:29:52.386 "ddgst": ${ddgst:-false} 00:29:52.386 }, 00:29:52.386 "method": "bdev_nvme_attach_controller" 00:29:52.386 } 00:29:52.386 EOF 00:29:52.386 )") 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.386 { 00:29:52.386 "params": { 00:29:52.386 "name": "Nvme$subsystem", 00:29:52.386 "trtype": "$TEST_TRANSPORT", 00:29:52.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.386 "adrfam": "ipv4", 00:29:52.386 "trsvcid": "$NVMF_PORT", 00:29:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.386 "hdgst": ${hdgst:-false}, 00:29:52.386 "ddgst": ${ddgst:-false} 00:29:52.386 }, 00:29:52.386 "method": "bdev_nvme_attach_controller" 00:29:52.386 } 00:29:52.386 EOF 00:29:52.386 )") 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.386 { 00:29:52.386 "params": { 00:29:52.386 "name": "Nvme$subsystem", 00:29:52.386 "trtype": "$TEST_TRANSPORT", 00:29:52.386 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.386 "adrfam": "ipv4", 00:29:52.386 "trsvcid": "$NVMF_PORT", 00:29:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.386 "hdgst": ${hdgst:-false}, 00:29:52.386 "ddgst": ${ddgst:-false} 00:29:52.386 }, 00:29:52.386 "method": "bdev_nvme_attach_controller" 00:29:52.386 } 00:29:52.386 EOF 00:29:52.386 )") 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.386 { 00:29:52.386 "params": { 00:29:52.386 "name": "Nvme$subsystem", 00:29:52.386 "trtype": "$TEST_TRANSPORT", 00:29:52.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.386 "adrfam": "ipv4", 00:29:52.386 "trsvcid": "$NVMF_PORT", 00:29:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.386 "hdgst": ${hdgst:-false}, 00:29:52.386 "ddgst": ${ddgst:-false} 00:29:52.386 }, 00:29:52.386 "method": "bdev_nvme_attach_controller" 00:29:52.386 } 00:29:52.386 EOF 00:29:52.386 )") 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.386 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.386 { 00:29:52.386 "params": { 00:29:52.387 "name": "Nvme$subsystem", 00:29:52.387 "trtype": "$TEST_TRANSPORT", 00:29:52.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "$NVMF_PORT", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.387 "hdgst": ${hdgst:-false}, 00:29:52.387 "ddgst": ${ddgst:-false} 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 } 00:29:52.387 EOF 00:29:52.387 )") 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.387 { 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme$subsystem", 00:29:52.387 "trtype": "$TEST_TRANSPORT", 00:29:52.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "$NVMF_PORT", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.387 "hdgst": ${hdgst:-false}, 00:29:52.387 "ddgst": ${ddgst:-false} 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 } 00:29:52.387 EOF 00:29:52.387 )") 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.387 { 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme$subsystem", 00:29:52.387 "trtype": "$TEST_TRANSPORT", 00:29:52.387 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "$NVMF_PORT", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.387 "hdgst": ${hdgst:-false}, 00:29:52.387 "ddgst": ${ddgst:-false} 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 } 00:29:52.387 EOF 00:29:52.387 )") 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.387 { 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme$subsystem", 00:29:52.387 "trtype": "$TEST_TRANSPORT", 00:29:52.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "$NVMF_PORT", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.387 "hdgst": ${hdgst:-false}, 00:29:52.387 "ddgst": ${ddgst:-false} 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 } 00:29:52.387 EOF 00:29:52.387 )") 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.387 { 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme$subsystem", 00:29:52.387 "trtype": "$TEST_TRANSPORT", 00:29:52.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "$NVMF_PORT", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.387 "hdgst": ${hdgst:-false}, 00:29:52.387 "ddgst": ${ddgst:-false} 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 } 00:29:52.387 EOF 00:29:52.387 )") 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:52.387 07:56:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme1", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme2", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme3", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme4", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme5", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme6", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme7", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme8", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:52.387 "hdgst": false, 
00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme9", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 },{ 00:29:52.387 "params": { 00:29:52.387 "name": "Nvme10", 00:29:52.387 "trtype": "tcp", 00:29:52.387 "traddr": "10.0.0.2", 00:29:52.387 "adrfam": "ipv4", 00:29:52.387 "trsvcid": "4420", 00:29:52.387 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:52.387 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:52.387 "hdgst": false, 00:29:52.387 "ddgst": false 00:29:52.387 }, 00:29:52.387 "method": "bdev_nvme_attach_controller" 00:29:52.387 }' 00:29:52.387 [2024-07-15 07:56:43.593722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:52.387 [2024-07-15 07:56:43.593873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171965 ] 00:29:52.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.645 [2024-07-15 07:56:43.721322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.902 [2024-07-15 07:56:43.960748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.797 Running I/O for 10 seconds... 00:29:55.055 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.055 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:55.055 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:55.055 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.055 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:55.312 07:56:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:55.312 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:55.569 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1171965 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1171965 ']' 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1171965 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1171965 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1171965' 00:29:55.825 killing process with pid 1171965 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1171965 00:29:55.825 07:56:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1171965
00:29:55.825 Received shutdown signal, test time was about 1.047528 seconds
00:29:55.825
00:29:55.825 Latency(us)
00:29:55.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.825 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme1n1 : 1.01 189.36 11.83 0.00 0.00 333991.63 24272.59 301368.51
00:29:55.825 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme2n1 : 1.01 206.40 12.90 0.00 0.00 291975.48 14757.74 296708.17
00:29:55.825 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme3n1 : 1.03 186.33 11.65 0.00 0.00 325770.56 25243.50 301368.51
00:29:55.825 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme4n1 : 1.04 252.45 15.78 0.00 0.00 235599.67 3082.62 301368.51
00:29:55.825 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme5n1 : 1.00 206.46 12.90 0.00 0.00 275559.12 18447.17 276513.37
00:29:55.825 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme6n1 : 1.02 188.29 11.77 0.00 0.00 302602.11 23204.60 299815.06
00:29:55.825 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme7n1 : 1.00 199.58 12.47 0.00 0.00 275778.24 9757.58 292047.83
00:29:55.825 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme8n1 : 0.99 193.61 12.10 0.00 0.00 280211.41 39418.69 302921.96
00:29:55.825 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme9n1 : 1.03 186.53 11.66 0.00 0.00 285445.69 22622.06 320009.86
00:29:55.825 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:55.825 Verification LBA range: start 0x0 length 0x400
00:29:55.825 Nvme10n1 : 1.05 183.44 11.47 0.00 0.00 285663.64 25243.50 338651.21
00:29:55.825 ===================================================================================================================
00:29:55.825 Total : 1992.44 124.53 0.00 0.00 287281.14 3082.62 338651.21
00:29:57.202 07:56:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:58.137 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1171652 00:29:58.137 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:58.138 rmmod nvme_tcp 00:29:58.138 rmmod nvme_fabrics 00:29:58.138 rmmod nvme_keyring 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1171652 ']' 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1171652 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1171652 ']' 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1171652 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1171652 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1171652' 00:29:58.138 killing process with pid 1171652 00:29:58.138 07:56:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1171652 00:29:58.138 07:56:49
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1171652 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.424 07:56:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:03.331 00:30:03.331 real 0m12.990s 00:30:03.331 user 0m43.620s 00:30:03.331 sys 0m1.999s 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:03.331 ************************************ 00:30:03.331 END TEST nvmf_shutdown_tc2 00:30:03.331 ************************************ 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:03.331 ************************************ 00:30:03.331 START TEST nvmf_shutdown_tc3 00:30:03.331 ************************************ 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.331 07:56:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:03.331 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:03.331 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:03.331 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.331 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:03.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.332 07:56:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:03.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:30:03.332 00:30:03.332 --- 10.0.0.2 ping statistics --- 00:30:03.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.332 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:30:03.332 00:30:03.332 --- 10.0.0.1 ping statistics --- 00:30:03.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.332 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1173398 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1173398 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1173398 ']' 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.332 07:56:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.332 [2024-07-15 07:56:54.508980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:03.332 [2024-07-15 07:56:54.509130] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.589 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.589 [2024-07-15 07:56:54.642198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.869 [2024-07-15 07:56:54.899844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.869 [2024-07-15 07:56:54.899924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.869 [2024-07-15 07:56:54.899963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.869 [2024-07-15 07:56:54.899985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.869 [2024-07-15 07:56:54.900009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
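An aside on the interface plumbing traced earlier in this test (nvmf/common.sh@229-@268): replayed in order, nvmf_tcp_init amounts to the sequence below. Every command appears verbatim in the xtrace above; only the comments are added here.

# Port 0 of the e810 pair becomes the target inside a private network
# namespace; port 1 stays in the root namespace as the initiator.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NVMF_TARGET_NAMESPACE
ip link set cvl_0_0 netns $NVMF_TARGET_NAMESPACE
ip addr add $NVMF_INITIATOR_IP/24 dev cvl_0_1
ip netns exec $NVMF_TARGET_NAMESPACE ip addr add $NVMF_FIRST_TARGET_IP/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set cvl_0_0 up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 $NVMF_FIRST_TARGET_IP                                    # initiator -> target
ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 $NVMF_INITIATOR_IP  # target -> initiator

This also accounts for the triple "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt launch above: nvmf/common.sh@270 prepends the NVMF_TARGET_NS_CMD wrapper to NVMF_APP, so the prefix apparently accumulates once per (re)initialization of the environment.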
00:30:03.869 [2024-07-15 07:56:54.900138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.869 [2024-07-15 07:56:54.900236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.869 [2024-07-15 07:56:54.900291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.869 [2024-07-15 07:56:54.900300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:04.437 [2024-07-15 07:56:55.496601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.437 07:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:04.437 Malloc1 00:30:04.437 [2024-07-15 07:56:55.637820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.703 Malloc2 00:30:04.703 Malloc3 00:30:04.703 Malloc4 00:30:04.977 Malloc5 00:30:04.977 Malloc6 00:30:05.235 Malloc7 00:30:05.235 Malloc8 00:30:05.235 Malloc9 00:30:05.493 Malloc10 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1173711 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1173711 /var/tmp/bdevperf.sock 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1173711 ']' 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:05.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:05.493 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.493 { 00:30:05.493 "params": { 00:30:05.493 "name": "Nvme$subsystem", 00:30:05.493 "trtype": "$TEST_TRANSPORT", 00:30:05.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.493 "adrfam": "ipv4", 00:30:05.493 "trsvcid": "$NVMF_PORT", 00:30:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.493 "hdgst": ${hdgst:-false}, 00:30:05.493 "ddgst": ${ddgst:-false} 00:30:05.493 }, 00:30:05.493 "method": "bdev_nvme_attach_controller" 00:30:05.493 } 00:30:05.493 EOF 00:30:05.493 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 
00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.494 { 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme$subsystem", 00:30:05.494 "trtype": "$TEST_TRANSPORT", 00:30:05.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "$NVMF_PORT", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.494 "hdgst": ${hdgst:-false}, 00:30:05.494 "ddgst": ${ddgst:-false} 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 } 00:30:05.494 EOF 00:30:05.494 )") 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:05.494 07:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme1", 00:30:05.494 "trtype": "tcp", 00:30:05.494 "traddr": "10.0.0.2", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "4420", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.494 "hdgst": false, 00:30:05.494 "ddgst": false 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 },{ 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme2", 00:30:05.494 "trtype": "tcp", 00:30:05.494 "traddr": "10.0.0.2", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "4420", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:05.494 "hdgst": false, 00:30:05.494 "ddgst": false 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 },{ 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme3", 00:30:05.494 "trtype": "tcp", 00:30:05.494 "traddr": "10.0.0.2", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "4420", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:05.494 "hdgst": false, 00:30:05.494 "ddgst": false 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 },{ 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme4", 00:30:05.494 "trtype": "tcp", 00:30:05.494 "traddr": "10.0.0.2", 00:30:05.494 "adrfam": "ipv4", 00:30:05.494 "trsvcid": "4420", 00:30:05.494 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:05.494 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:05.494 "hdgst": false, 00:30:05.494 "ddgst": false 00:30:05.494 }, 00:30:05.494 "method": "bdev_nvme_attach_controller" 00:30:05.494 },{ 00:30:05.494 "params": { 00:30:05.494 "name": "Nvme5", 00:30:05.494 "trtype": "tcp", 00:30:05.494 "traddr": "10.0.0.2", 00:30:05.494 "adrfam": "ipv4", 00:30:05.495 "trsvcid": "4420", 00:30:05.495 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:05.495 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:05.495 "hdgst": false, 00:30:05.495 "ddgst": false 00:30:05.495 }, 00:30:05.495 "method": "bdev_nvme_attach_controller" 00:30:05.495 },{ 00:30:05.495 "params": { 00:30:05.495 "name": "Nvme6", 00:30:05.495 "trtype": "tcp", 00:30:05.495 "traddr": "10.0.0.2", 00:30:05.495 "adrfam": "ipv4", 00:30:05.495 "trsvcid": "4420", 00:30:05.495 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:05.495 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:05.495 "hdgst": false, 00:30:05.495 "ddgst": false 00:30:05.495 }, 00:30:05.495 "method": "bdev_nvme_attach_controller" 00:30:05.495 },{ 00:30:05.495 "params": { 00:30:05.495 "name": "Nvme7", 00:30:05.495 "trtype": "tcp", 00:30:05.495 "traddr": "10.0.0.2", 00:30:05.495 "adrfam": "ipv4", 00:30:05.495 "trsvcid": "4420", 00:30:05.495 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:05.495 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:05.495 "hdgst": false, 00:30:05.495 "ddgst": false 00:30:05.495 }, 00:30:05.495 "method": "bdev_nvme_attach_controller" 00:30:05.495 },{ 00:30:05.495 "params": { 00:30:05.495 "name": "Nvme8", 00:30:05.495 "trtype": "tcp", 00:30:05.495 "traddr": "10.0.0.2", 00:30:05.495 "adrfam": "ipv4", 00:30:05.495 "trsvcid": "4420", 00:30:05.495 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:05.495 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:05.495 "hdgst": false, 
00:30:05.495 "ddgst": false 00:30:05.495 }, 00:30:05.495 "method": "bdev_nvme_attach_controller" 00:30:05.495 },{ 00:30:05.495 "params": { 00:30:05.495 "name": "Nvme9", 00:30:05.495 "trtype": "tcp", 00:30:05.495 "traddr": "10.0.0.2", 00:30:05.495 "adrfam": "ipv4", 00:30:05.495 "trsvcid": "4420", 00:30:05.495 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:05.495 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:05.495 "hdgst": false, 00:30:05.495 "ddgst": false 00:30:05.495 }, 00:30:05.495 "method": "bdev_nvme_attach_controller" 00:30:05.495 },{ 00:30:05.495 "params": { 00:30:05.495 "name": "Nvme10", 00:30:05.495 "trtype": "tcp", 00:30:05.495 "traddr": "10.0.0.2", 00:30:05.495 "adrfam": "ipv4", 00:30:05.495 "trsvcid": "4420", 00:30:05.495 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:05.495 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:05.495 "hdgst": false, 00:30:05.495 "ddgst": false 00:30:05.495 }, 00:30:05.495 "method": "bdev_nvme_attach_controller" 00:30:05.495 }' 00:30:05.495 [2024-07-15 07:56:56.645496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:05.495 [2024-07-15 07:56:56.645654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173711 ] 00:30:05.495 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.752 [2024-07-15 07:56:56.775330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.009 [2024-07-15 07:56:57.012096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.910 Running I/O for 10 seconds... 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:08.168 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.427 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:08.427 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:08.427 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:08.427 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:08.427 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1173398 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1173398 ']' 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1173398 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173398 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173398' 00:30:08.700 killing process with pid 1173398 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1173398 00:30:08.700 07:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1173398 00:30:08.700 
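The read-count gate that released the break above is fully visible in the xtrace (target/shutdown.sh@50-@69, traced identically in tc2 earlier): waitforio polls bdev_get_iostat over the bdevperf RPC socket until the first bdev has completed at least 100 reads, up to 10 tries, 0.25 s apart. Reassembled as a sketch, with rpc_cmd standing for the suite's RPC wrapper; the early-return style on the argument checks is an assumption, since the trace only shows the tests themselves:

waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1   # shutdown.sh@50 (failure action not traced)
    [ -z "$bdev" ] && return 1       # shutdown.sh@54 (failure action not traced)
    local ret=1                      # shutdown.sh@57
    local i                          # shutdown.sh@58
    for ((i = 10; i != 0; i--)); do  # shutdown.sh@59
        # shutdown.sh@60: ask bdevperf for iostat and pull out the read count
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then  # shutdown.sh@63
            ret=0                              # shutdown.sh@64
            break                              # shutdown.sh@65
        fi
        sleep 0.25                             # shutdown.sh@67
    done
    return $ret                                # shutdown.sh@69
}

In this tc3 run the count crossed the threshold on the second sample (67, then 131); in tc2 above it took three (3, 67, 131).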
[2024-07-15 07:56:59.726438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:08.701
[2024-07-15 07:56:59.726566 - 07:56:59.727782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 62 more times for tqpair=0x61800000a080
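For context on the flood of identical ERROR lines above: the message comes from the guard in SPDK's lib/nvmf/tcp.c that rejects a no-op receive-state transition, and it repeats because the teardown path keeps requesting the state the qpair is already in. A minimal sketch of such a guard, assuming simplified names; the struct, enum values, and function below are illustrative stand-ins, not SPDK's actual definitions:

```c
#include <stdio.h>

/* Illustrative receive states; only the numeric value 5 is taken from
 * the log ("state(5)"), the names are assumptions. */
enum tcp_pdu_recv_state {
	TCP_RECV_STATE_READY = 0,
	/* ... intermediate receive states ... */
	TCP_RECV_STATE_ERROR = 5,
};

struct tcp_qpair {
	enum tcp_pdu_recv_state recv_state;
};

static void
tcp_qpair_set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* This guard produces the repeated log line: the qpair is
		 * already in the requested state, so nothing changes. */
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* the real function also resets PDU bookkeeping and re-arms
	 * socket reads for the new state */
}

int
main(void)
{
	struct tcp_qpair q = { .recv_state = TCP_RECV_STATE_ERROR };

	/* The qpair is already in ERROR, so this call only logs. */
	tcp_qpair_set_recv_state(&q, TCP_RECV_STATE_ERROR);
	return 0;
}
```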
[2024-07-15 07:56:59.730283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.701
[2024-07-15 07:56:59.730324 - 07:56:59.731470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 62 more times for tqpair=0x61800000c480
[2024-07-15 07:56:59.734803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:30:08.702
[2024-07-15 07:56:59.734842 - 07:56:59.735101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 14 more times for tqpair=0x61800000a480
[2024-07-15 07:56:59.735809 - 07:56:59.738439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0-52 nsid:1 lba:24576-31232 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.702
[2024-07-15 07:56:59.735873 - 07:56:59.738461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one completion per WRITE above, 53 in total)
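The "(00/08)" pair printed with every aborted completion above is the NVMe status code type and status code: type 0x00 is Generic Command Status, and code 0x08 within it is Command Aborted due to SQ Deletion, i.e. the queued WRITEs were thrown away because their submission queue was deleted while the connection was torn down. A small decoding sketch; the helper names are hypothetical, not part of SPDK:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical decoder for the "(sct/sc)" pair that
 * spdk_nvme_print_completion logs. */
static const char *
generic_status_str(uint8_t sc)
{
	switch (sc) {
	case 0x00: return "SUCCESS";
	case 0x08: return "ABORTED - SQ DELETION"; /* seen throughout this log */
	default:   return "OTHER GENERIC STATUS";
	}
}

int
main(void)
{
	uint8_t sct = 0x00; /* status code type: generic command status */
	uint8_t sc  = 0x08; /* status code: command aborted due to SQ deletion */

	if (sct == 0x00) {
		printf("(%02x/%02x) -> %s\n",
		       (unsigned)sct, (unsigned)sc, generic_status_str(sc));
	}
	return 0;
}
```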
[2024-07-15 07:56:59.738486 - 07:56:59.738984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53-63 nsid:1 lba:31360-32640 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.703
[2024-07-15 07:56:59.738508 - 07:56:59.739008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one completion per WRITE above, 11 in total)
[2024-07-15 07:56:59.738540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:08.703
[2024-07-15 07:56:59.738582 - 07:56:59.739730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 62 more times for tqpair=0x61800000a880
[2024-07-15 07:56:59.739318] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8900 was disconnected and freed. reset controller. 00:30:08.704
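The bdev_nvme NOTICE just above ties the two message streams together: the initiator noticed the dead I/O qpair, freed it (which is what aborted the outstanding WRITEs with SQ DELETION status), and scheduled a controller reset. A rough sketch of that shape, under assumed types; the structs and callback below are hypothetical, not SPDK's bdev_nvme internals:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the qpair/controller objects that
 * bdev_nvme_disconnected_qpair_cb operates on. */
struct io_qpair {
	void *handle;
};

struct controller {
	bool resetting;
};

static void
disconnected_qpair_cb(struct io_qpair *qpair, struct controller *ctrlr)
{
	/* By this point, deleting the submission queue has already
	 * aborted every in-flight command with generic status 0x08
	 * (ABORTED - SQ DELETION), matching the NOTICE lines above. */
	printf("qpair %p was disconnected and freed. reset controller.\n",
	       qpair->handle);

	if (!ctrlr->resetting) {
		ctrlr->resetting = true;
		/* a real implementation queues an asynchronous controller
		 * reset here instead of resetting inline */
	}
}

int
main(void)
{
	struct io_qpair q = { .handle = (void *)(uintptr_t)0x6150001f8900ULL };
	struct controller c = { .resetting = false };

	disconnected_qpair_cb(&q, &c);
	return 0;
}
```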
[2024-07-15 07:56:59.742238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.704
[2024-07-15 07:56:59.742279 - 07:56:59.742537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 15 more times for tqpair=0x61800000ac80
[2024-07-15 07:56:59.745336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.705
[2024-07-15 07:56:59.745370 - 07:56:59.746560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 62 more times for tqpair=0x61800000b480
[2024-07-15 07:56:59.749742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.705
[2024-07-15 07:56:59.749778 - 07:56:59.750197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: last message repeated 22 more times for tqpair=0x61800000b880
[2024-07-15 
07:56:59.750225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 
07:56:59.750613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.750965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 
07:56:59.754335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 
07:56:59.754735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.706 [2024-07-15 07:56:59.754946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.754964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.754982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.754999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 
07:56:59.755141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.755474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.707 [2024-07-15 07:56:59.762831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.707 [2024-07-15 07:56:59.762910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.707 
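This repeated *ERROR* comes from a guard in the target-side TCP transport (tcp.c:1607, nvmf_tcp_qpair_set_recv_state) that refuses to re-apply the PDU receive state a qpair already holds; once teardown keeps requesting the same transition, the guard fires on every request, which is why the line floods the log. A minimal standalone model of that guard, with simplified types and state names that are assumptions (only the message format and the file/function come from the log; state(5) is presumably the terminal error state of the enum):

    #include <stdio.h>

    /* Simplified stand-in for the PDU receive-state enum; the exact names
     * and ordering are assumptions, but the log's state(5) suggests the
     * last (error) state is being re-applied during teardown. */
    enum pdu_recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        RECV_STATE_AWAIT_PDU_CH,
        RECV_STATE_AWAIT_PDU_PSH,
        RECV_STATE_AWAIT_PDU_PAYLOAD,
        RECV_STATE_QUIESCING,
        RECV_STATE_ERROR,               /* = 5 */
    };

    struct tcp_qpair {                  /* stand-in for the transport qpair */
        enum pdu_recv_state recv_state;
    };

    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Duplicate transition: report and ignore, as in the log. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

        set_recv_state(&q, RECV_STATE_ERROR);   /* applied silently */
        set_recv_state(&q, RECV_STATE_ERROR);   /* duplicate: logs the *ERROR* line */
        set_recv_state(&q, RECV_STATE_ERROR);   /* and again, once per request */
        return 0;
    }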
00:30:08.707 [2024-07-15 07:56:59.762831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:08.707 [2024-07-15 07:56:59.762910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.707 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1 through cid:3 (07:56:59.762940 to 07:56:59.763054); condensed ...]
00:30:08.707 [2024-07-15 07:56:59.763075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set
00:30:08.707 [... the same four-command abort sequence plus recv-state *ERROR* repeated for admin tqpairs 0x6150001f6880, 0x6150001f6100, 0x6150001f4a80, 0x6150001f2c80, 0x6150001f3400, 0x6150001f3b80, 0x6150001f2500, 0x6150001f4300 and 0x6150001f5200 (07:56:59.763162 to 07:56:59.765454); condensed ...]
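Every completion in these bursts carries status (00/08), which spdk_nvme_print_completion renders as the SCT/SC pair in hex: status code type 0x0 is the NVMe generic command status set, and code 0x08 in that set is "Command Aborted due to SQ Deletion". In other words the submission queues are being deleted out from under in-flight commands, so each one completes as aborted rather than failed. An illustrative decoder for the generic codes seen here (values from the NVMe base specification; this helper is not SPDK's own):

    #include <stdio.h>

    /* Generic command status (SCT 0x0) values from the NVMe base spec
     * that are relevant to queue teardown. */
    static const char *generic_status_str(unsigned int sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "(other generic status)";
        }
    }

    /* Render status the way the log does: LABEL (SCT/SC) in two-digit hex. */
    static void print_status(unsigned int sct, unsigned int sc)
    {
        printf("%s (%02x/%02x)\n",
               sct == 0x0 ? generic_status_str(sc) : "(non-generic SCT)", sct, sc);
    }

    int main(void)
    {
        print_status(0x0, 0x08);    /* prints: ABORTED - SQ DELETION (00/08) */
        return 0;
    }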
00:30:08.708 [2024-07-15 07:56:59.768305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 
[2024-07-15 07:56:59.768798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.768966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.768989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.769014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.769037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.769062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.769085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.769109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.769132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.769156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.769191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.769216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.769240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.708 [2024-07-15 07:56:59.769264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.708 [2024-07-15 07:56:59.769288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 
07:56:59.769313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 
07:56:59.769828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.769969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.769998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 
07:56:59.770348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 
07:56:59.770837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.770953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.770977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.771003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.771025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.771050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.771073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.709 [2024-07-15 07:56:59.771098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.709 [2024-07-15 07:56:59.771121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.771145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.771168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.771202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9080 is same with the state(5) to be set 00:30:08.710 [2024-07-15 07:56:59.771541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9080 was disconnected and freed. reset controller. 
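The block above is the SPDK NVMe driver draining a TCP qpair during a controller reset: for every command still outstanding, nvme_io_qpair_print_command (nvme_qpair.c:243) prints the submission (opcode, sqid, cid, nsid, lba, len) and spdk_nvme_print_completion (nvme_qpair.c:474) prints the matching completion, which here is always ABORTED - SQ DELETION (00/08), i.e. status code type 0x0 with status code 0x08, until bdev_nvme_disconnected_qpair_cb reports that qpair 0x6150001f9080 was disconnected and freed and the controller reset begins. Below is a minimal post-processing sketch for tallying these aborts from a saved console log; it is a hypothetical helper (not part of SPDK or this test suite), and the regexes assume the exact nvme_qpair.c print format visible above.

#!/usr/bin/env python3
# Summarize "ABORTED - SQ DELETION" completions in an SPDK autotest console log.
# Hypothetical helper; the patterns assume the nvme_qpair.c format shown above.
import re
import sys
from collections import Counter

CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)")
ABORT_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION \(00/08\)")

def summarize(stream):
    ops = Counter()   # aborted-command prints, keyed by opcode
    aborts = 0        # matching SQ-DELETION completion prints
    for line in stream:
        m = CMD_RE.search(line)
        if m:
            ops[m.group("op")] += 1
        if ABORT_RE.search(line):
            aborts += 1
    return ops, aborts

if __name__ == "__main__":
    ops, aborts = summarize(sys.stdin)
    for op, n in sorted(ops.items()):
        print(f"{op:5} commands printed: {n}")
    print(f"SQ-DELETION aborts:   {aborts}")

Run as, e.g., python3 summarize_aborts.py < console.log; in a healthy drain the number of command prints and the number of SQ-DELETION completion prints should match pairwise, as they do in the block above.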
00:30:08.710 [2024-07-15 07:56:59.771911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.771955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.771988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 
[2024-07-15 07:56:59.772432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772935] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.772959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.772983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.710 [2024-07-15 07:56:59.773957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.710 [2024-07-15 07:56:59.773980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.774954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.774978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.775002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.775050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.775103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.775126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.775463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9800 was disconnected and freed. reset controller. 00:30:08.711 [2024-07-15 07:56:59.775721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:08.711 [2024-07-15 07:56:59.775808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.775908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.775962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:30:08.711 [2024-07-15 07:56:59.776807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.776839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.776874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.776927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.776954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.776978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.711 [2024-07-15 07:56:59.777512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.711 [2024-07-15 07:56:59.777535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.777561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.777584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.777609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.777632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.777658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.777680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.777705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.777728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.777757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.777782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.777804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:30:08.712 [2024-07-15 07:56:59.778105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8e00 was disconnected and freed. reset controller. 
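The same drain pattern repeats above for qpairs 0x6150001f9800 and 0x6150001f8e00, and in between the reset path itself is visible: nvme_ctrlr_disconnect announces the reset for nqn.2016-06.io.spdk:cnode2, after which every pending nvme_tcp_qpair_process_completions flush fails with "(9): Bad file descriptor" because the TCP sockets are already torn down (errno 9 is EBADF). The "(00/08)" pair in the completion prints is the NVMe status: SCT 0x0 (generic command status) and SC 0x08, Command Aborted due to SQ Deletion, per the NVMe base specification. A minimal sketch of decoding that pair follows; it maps only the codes that actually appear in this log and is an illustration, not SPDK's own decoder.

# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
# Minimal sketch; only the codes visible in this log are mapped, anything
# else falls back to a raw hex rendering.
GENERIC_STATUS = {  # SCT 0x0, generic command status (NVMe base spec)
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic sc=0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

assert decode_status(0x0, 0x08) == "ABORTED - SQ DELETION"
print(decode_status(0x0, 0x08))  # matches the (00/08) completions above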
00:30:08.712 [2024-07-15 07:56:59.780844] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.712 [2024-07-15 07:56:59.781030] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.712 [2024-07-15 07:56:59.782934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:08.712 [2024-07-15 07:56:59.782976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:08.712 [2024-07-15 07:56:59.783196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.712 [2024-07-15 07:56:59.783244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:30:08.712 [2024-07-15 07:56:59.783270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:08.712 [2024-07-15 07:56:59.784029] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.712 [2024-07-15 07:56:59.784122] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.712 [2024-07-15 07:56:59.784687] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.712 [2024-07-15 07:56:59.784740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:08.712 [2024-07-15 07:56:59.784945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.712 [2024-07-15 07:56:59.784982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:30:08.712 [2024-07-15 07:56:59.785006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:08.712 [2024-07-15 07:56:59.785129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.712 [2024-07-15 07:56:59.785163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:30:08.712 [2024-07-15 07:56:59.785186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:30:08.712 [2024-07-15 07:56:59.785214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:08.712 [2024-07-15 07:56:59.785316] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.712 [2024-07-15 07:56:59.786172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.712 [2024-07-15 07:56:59.786220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:30:08.712 [2024-07-15 07:56:59.786244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:08.712 [2024-07-15 07:56:59.786272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:08.712 [2024-07-15 07:56:59.786301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:08.712 [2024-07-15 07:56:59.786327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:08.712 [2024-07-15 07:56:59.786349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:08.712 [2024-07-15 07:56:59.786377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:08.712 [2024-07-15 07:56:59.786629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.712 [2024-07-15 07:56:59.786703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:08.712 [2024-07-15 07:56:59.786732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:08.712 [2024-07-15 07:56:59.786752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:08.712 [2024-07-15 07:56:59.786773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:08.712 [2024-07-15 07:56:59.786801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:08.712 [2024-07-15 07:56:59.786821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:08.712 [2024-07-15 07:56:59.786840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:08.712 [2024-07-15 07:56:59.786919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.786956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.786990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.787960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.712 [2024-07-15 07:56:59.787984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.712 [2024-07-15 07:56:59.788012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.713 [2024-07-15 07:56:59.788659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.713 [2024-07-15 07:56:59.788682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.713 [2024-07-15 07:56:59.788705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.713 [2024-07-15 07:56:59.788727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.713 [... identical READ / ABORTED - SQ DELETION notice pairs repeated for cid:37 through cid:62 (lba 21120 through 24320, step 128, len:128) ...]
00:30:08.714 [2024-07-15 07:56:59.790066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.714 [2024-07-15 07:56:59.790089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.714 [2024-07-15 07:56:59.790111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set
00:30:08.714 [2024-07-15 07:56:59.791647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.714 [2024-07-15 07:56:59.791678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.714 [... identical READ / ABORTED - SQ DELETION notice pairs repeated for cid:1 through cid:62 (lba 16512 through 24320, step 128, len:128) ...]
00:30:08.715 [2024-07-15 07:56:59.794770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.715 [2024-07-15 07:56:59.794791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.715 [2024-07-15 07:56:59.794812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set
00:30:08.715 [2024-07-15 07:56:59.796399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.715 [2024-07-15 07:56:59.796430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.715 [... identical READ / ABORTED - SQ DELETION notice pairs repeated for cid:1 through cid:62 (lba 16512 through 24320, step 128, len:128) ...]
00:30:08.717 [2024-07-15 07:56:59.799468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.717 [2024-07-15 07:56:59.799489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.717 [2024-07-15 07:56:59.799511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set
00:30:08.717 [2024-07-15 07:56:59.801044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.717 [2024-07-15 07:56:59.801075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.717 [... identical READ / ABORTED - SQ DELETION notice pairs repeated for cid:1 through cid:51 (lba 16512 through 22912, step 128, len:128) ...]
00:30:08.718 [2024-07-15 07:56:59.803577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.718 [2024-07-15 07:56:59.803598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.718 [2024-07-15 07:56:59.803622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.803683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.803741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.803786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.803831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.803901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.803963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.803985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.804009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.804032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.804056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.804078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.804102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.804124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 
07:56:59.804148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.804177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.804214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9580 is same with the state(5) to be set 00:30:08.718 [2024-07-15 07:56:59.805729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.805759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.805788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.805842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.805887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.805914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.805938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.805962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.805985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.806009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.806031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.718 [2024-07-15 07:56:59.806055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.718 [2024-07-15 07:56:59.806077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806185] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.806968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.806990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.807966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.807990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.808012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.808037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.808060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.808083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.719 [2024-07-15 07:56:59.808106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.719 [2024-07-15 07:56:59.808130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.808825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.808852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:30:08.720 [2024-07-15 07:56:59.813527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813747] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.813961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.813983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.720 [2024-07-15 07:56:59.814867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.720 [2024-07-15 07:56:59.814901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.814927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.814950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.814974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.814997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:08.721 [2024-07-15 07:56:59.815708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.815963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.815992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.816015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.816040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.816062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.816086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.816108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.816132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 07:56:59.816155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.721 [2024-07-15 07:56:59.816207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.721 [2024-07-15 
07:56:59.816230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.721 [2024-07-15 07:56:59.816620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.721 [2024-07-15 07:56:59.816642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:08.722 [2024-07-15 07:56:59.816664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9d00 is same with the state(5) to be set
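The notice pairs above are SPDK failing back every I/O that was still outstanding when the queue pairs were torn down: each READ/WRITE on sqid:1 (cid 0-63) completes with status (00/08), which in NVMe terms is Status Code Type 0x0 (generic command status) and Status Code 0x08 (Command Aborted due to SQ Deletion) -- the expected completion status when a submission queue is deleted during a controller reset. A quick way to tally the aborted completions and the qpairs involved from a saved copy of this console output (build.log is a hypothetical file name, not produced by this job):

  grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l           # total aborted completions
  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn  # which qpairs appear, and how often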
bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.722 [2024-07-15 07:56:59.821532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.722 [2024-07-15 07:56:59.821553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.722 [2024-07-15 07:56:59.821578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:08.722 [2024-07-15 07:56:59.821602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:08.722 [2024-07-15 07:56:59.821688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:08.722 [2024-07-15 07:56:59.821714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:08.722 [2024-07-15 07:56:59.821733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:08.722 [2024-07-15 07:56:59.821815] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.722 [2024-07-15 07:56:59.821850] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.722 [2024-07-15 07:56:59.821903] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.722 [2024-07-15 07:56:59.821937] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.722 [2024-07-15 07:56:59.822063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:08.722 [2024-07-15 07:56:59.822098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:08.722 task offset: 24576 on job bdev=Nvme2n1 fails 00:30:08.722 00:30:08.722 Latency(us) 00:30:08.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme1n1 ended in about 1.02 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme1n1 : 1.02 126.07 7.88 63.04 0.00 334749.65 23690.05 265639.25 00:30:08.722 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme2n1 ended in about 0.99 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme2n1 : 0.99 193.78 12.11 64.59 0.00 239865.93 21068.61 301368.51 00:30:08.722 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme3n1 ended in about 1.02 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme3n1 : 1.02 125.49 7.84 62.75 0.00 323064.16 38447.79 306028.85 00:30:08.722 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme4n1 ended in about 1.01 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme4n1 : 1.01 190.75 11.92 18.88 0.00 280639.72 21845.33 288940.94 00:30:08.722 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme5n1 ended in about 1.00 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 
00:30:08.722 Nvme5n1 : 1.00 191.37 11.96 63.79 0.00 228066.61 16893.72 298261.62 00:30:08.722 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme6n1 ended in about 1.02 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme6n1 : 1.02 124.92 7.81 62.46 0.00 304658.96 26020.22 313796.08 00:30:08.722 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme7n1 ended in about 1.03 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme7n1 : 1.03 124.35 7.77 62.18 0.00 299594.33 23204.60 299815.06 00:30:08.722 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme8n1 ended in about 1.00 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme8n1 : 1.00 127.41 7.96 63.70 0.00 284933.69 17282.09 326223.64 00:30:08.722 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme9n1 ended in about 1.03 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme9n1 : 1.03 127.66 7.98 61.90 0.00 282223.63 21554.06 302921.96 00:30:08.722 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.722 Job: Nvme10n1 ended in about 1.04 seconds with error 00:30:08.722 Verification LBA range: start 0x0 length 0x400 00:30:08.722 Nvme10n1 : 1.04 122.87 7.68 61.43 0.00 284213.35 21262.79 329330.54 00:30:08.722 =================================================================================================================== 00:30:08.722 Total : 1454.68 90.92 584.72 0.00 282913.89 16893.72 329330.54 00:30:08.722 [2024-07-15 07:56:59.904362] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:08.722 [2024-07-15 07:56:59.904467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:08.722 [2024-07-15 07:56:59.904513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
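The Total row is the column-wise sum of the ten per-device rows above. It can be re-derived from a saved copy of the table; a minimal sketch, assuming the one-row-per-device layout above has been saved to perf.txt (a hypothetical file name):

# Sum the IOPS ($5) and MiB/s ($6) columns of the per-device rows; the
# "Job:" header lines are skipped because their third field is not ":".
awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" { iops += $5; mib += $6 }
     END { printf "IOPS=%.2f MiB/s=%.2f\n", iops, mib }' perf.txt
# Prints IOPS=1454.67 MiB/s=90.91, matching the Total row (1454.68 / 90.92)
# up to the two-decimal rounding of the per-device values.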
00:30:08.722 [2024-07-15 07:56:59.904918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.904968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.904998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.905168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.905203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.905226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.905364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.905397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.905420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.908460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:08.722 [2024-07-15 07:56:59.908692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.908731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.908755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.908954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.908999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.909024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.909159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.909194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.909217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.909251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:30:08.722 [2024-07-15 07:56:59.909286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor
00:30:08.722 [2024-07-15 07:56:59.909313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor
00:30:08.722 [2024-07-15 07:56:59.909381] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:08.722 [2024-07-15 07:56:59.909433] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:08.722 [2024-07-15 07:56:59.909481] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:08.722 [2024-07-15 07:56:59.909512] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:08.722 [2024-07-15 07:56:59.909539] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:08.722 [2024-07-15 07:56:59.909717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:08.722 [2024-07-15 07:56:59.909751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:08.722 [2024-07-15 07:56:59.910023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.910059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.910083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:30:08.722 [2024-07-15 07:56:59.910111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor
00:30:08.722 [2024-07-15 07:56:59.910139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:30:08.722 [2024-07-15 07:56:59.910166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor
00:30:08.722 [2024-07-15 07:56:59.910190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:08.722 [2024-07-15 07:56:59.910211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:08.722 [2024-07-15 07:56:59.910234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:08.722 [2024-07-15 07:56:59.910263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:30:08.722 [2024-07-15 07:56:59.910298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:30:08.722 [2024-07-15 07:56:59.910317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:30:08.722 [2024-07-15 07:56:59.910343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:08.722 [2024-07-15 07:56:59.910379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:08.722 [2024-07-15 07:56:59.910404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:08.722 [2024-07-15 07:56:59.910552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:08.722 [2024-07-15 07:56:59.910584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.722 [2024-07-15 07:56:59.910605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.722 [2024-07-15 07:56:59.910622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.722 [2024-07-15 07:56:59.910842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.722 [2024-07-15 07:56:59.910889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420
00:30:08.722 [2024-07-15 07:56:59.910915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set
00:30:08.723 [2024-07-15 07:56:59.911095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.723 [2024-07-15 07:56:59.911129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:30:08.723 [2024-07-15 07:56:59.911152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set
00:30:08.723 [2024-07-15 07:56:59.911179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor
00:30:08.723 [2024-07-15 07:56:59.911204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.911224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.911243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:30:08.723 [2024-07-15 07:56:59.911277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.911299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.911319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:30:08.723 [2024-07-15 07:56:59.911344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.911364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.911382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:30:08.723 [2024-07-15 07:56:59.911442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.723 [2024-07-15 07:56:59.911468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.723 [2024-07-15 07:56:59.911484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.723 [2024-07-15 07:56:59.911674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.723 [2024-07-15 07:56:59.911709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420
00:30:08.723 [2024-07-15 07:56:59.911732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:30:08.723 [2024-07-15 07:56:59.911760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor
00:30:08.723 [2024-07-15 07:56:59.911789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:30:08.723 [2024-07-15 07:56:59.911814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.911838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.911859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:08.723 [2024-07-15 07:56:59.911932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.723 [2024-07-15 07:56:59.911965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor
00:30:08.723 [2024-07-15 07:56:59.911991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.912011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.912030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:30:08.723 [2024-07-15 07:56:59.912056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.912076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.912096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:30:08.723 [2024-07-15 07:56:59.912155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.723 [2024-07-15 07:56:59.912196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:08.723 [2024-07-15 07:56:59.912215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:30:08.723 [2024-07-15 07:56:59.912233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:30:08.723 [2024-07-15 07:56:59.912252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:30:08.723 [2024-07-15 07:56:59.912312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
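Every posix_sock_create failure in the dump above reports errno = 111, which on Linux is ECONNREFUSED: by this point the shutdown test has already taken the target down (the kill -9 in the teardown below finds no process left and is swallowed with true), so each reconnect to 10.0.0.2:4420 is refused and the controller resets fail. An illustrative lookup of the errno name, not part of the test:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused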
00:30:12.007 07:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:12.007 07:57:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1173711 00:30:12.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1173711) - No such process 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:12.574 rmmod nvme_tcp 00:30:12.574 rmmod nvme_fabrics 00:30:12.574 rmmod nvme_keyring 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.574 07:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.110 07:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:15.110 00:30:15.110 real 0m11.553s 00:30:15.110 user 0m33.477s 00:30:15.110 sys 0m1.971s 00:30:15.110 
07:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.110 07:57:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:15.110 ************************************ 00:30:15.110 END TEST nvmf_shutdown_tc3 00:30:15.110 ************************************ 00:30:15.110 07:57:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:15.110 07:57:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:15.110 00:30:15.110 real 0m42.382s 00:30:15.110 user 2m14.216s 00:30:15.110 sys 0m8.094s 00:30:15.110 07:57:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.110 07:57:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:15.110 ************************************ 00:30:15.110 END TEST nvmf_shutdown 00:30:15.110 ************************************ 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:15.110 07:57:05 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.110 07:57:05 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.110 07:57:05 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:30:15.110 07:57:05 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.110 07:57:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.110 ************************************ 00:30:15.110 START TEST nvmf_multicontroller 00:30:15.110 ************************************ 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:15.110 * Looking for test storage... 
00:30:15.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:15.110 07:57:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:15.110 07:57:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.015 07:57:07 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:17.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:17.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.015 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:17.016 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:17.016 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.016 07:57:07 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:30:17.016 00:30:17.016 --- 10.0.0.2 ping statistics --- 00:30:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.016 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:30:17.016 00:30:17.016 --- 10.0.0.1 ping statistics --- 00:30:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.016 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1176910 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1176910 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1176910 ']' 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.016 07:57:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 [2024-07-15 07:57:08.060112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:17.016 [2024-07-15 07:57:08.060284] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.016 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.016 [2024-07-15 07:57:08.210397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:17.275 [2024-07-15 07:57:08.467716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.275 [2024-07-15 07:57:08.467800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.275 [2024-07-15 07:57:08.467833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.275 [2024-07-15 07:57:08.467854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.275 [2024-07-15 07:57:08.467884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
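nvmfappstart launched the target with -m 0xE, which shows up above as -c 0xE in the EAL parameters: bits 1, 2 and 3 of the mask are set, so three cores are reported available and the reactor notices that follow land on cores 1-3, leaving core 0 to the rest of the harness. A minimal sketch of the mask-to-core mapping (illustrative, not part of the test):

python3 -c 'mask = 0xE; print([c for c in range(mask.bit_length()) if mask >> c & 1])'
# [1, 2, 3]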
00:30:17.275 [2024-07-15 07:57:08.468031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.275 [2024-07-15 07:57:08.468108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.275 [2024-07-15 07:57:08.468135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.842 [2024-07-15 07:57:09.056250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.842 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 Malloc0 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 [2024-07-15 07:57:09.164018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 
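The rpc_cmd trace above provisions the target end to end: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. Outside the harness the same sequence can be issued by hand with SPDK's rpc.py; a sketch, assuming an SPDK checkout and a running target:

# Same steps as the rpc_cmd trace above, issued against the default RPC socket:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420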
07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 [2024-07-15 07:57:09.171834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 Malloc1 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1177254 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1177254 /var/tmp/bdevperf.sock 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1177254 ']' 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.102 07:57:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 NVMe0n1 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.479 1 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 request: 00:30:19.479 { 00:30:19.479 "name": "NVMe0", 00:30:19.479 "trtype": "tcp", 00:30:19.479 "traddr": "10.0.0.2", 00:30:19.479 "adrfam": "ipv4", 00:30:19.479 "trsvcid": "4420", 00:30:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.479 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:19.479 "hostaddr": "10.0.0.2", 00:30:19.479 "hostsvcid": "60000", 00:30:19.479 "prchk_reftag": false, 00:30:19.479 "prchk_guard": false, 00:30:19.479 "hdgst": false, 00:30:19.479 "ddgst": false, 00:30:19.479 "method": "bdev_nvme_attach_controller", 00:30:19.479 "req_id": 1 00:30:19.479 } 00:30:19.479 Got JSON-RPC error response 00:30:19.479 response: 00:30:19.479 { 00:30:19.479 "code": -114, 00:30:19.479 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:19.479 } 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 request: 00:30:19.479 { 00:30:19.479 "name": "NVMe0", 00:30:19.479 "trtype": "tcp", 00:30:19.479 "traddr": "10.0.0.2", 00:30:19.479 "adrfam": "ipv4", 00:30:19.479 "trsvcid": "4420", 00:30:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:19.479 "hostaddr": "10.0.0.2", 00:30:19.479 "hostsvcid": "60000", 00:30:19.479 "prchk_reftag": false, 00:30:19.479 "prchk_guard": false, 00:30:19.479 
"hdgst": false, 00:30:19.479 "ddgst": false, 00:30:19.479 "method": "bdev_nvme_attach_controller", 00:30:19.479 "req_id": 1 00:30:19.479 } 00:30:19.479 Got JSON-RPC error response 00:30:19.479 response: 00:30:19.479 { 00:30:19.479 "code": -114, 00:30:19.479 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:19.479 } 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.479 request: 00:30:19.479 { 00:30:19.479 "name": "NVMe0", 00:30:19.479 "trtype": "tcp", 00:30:19.479 "traddr": "10.0.0.2", 00:30:19.479 "adrfam": "ipv4", 00:30:19.479 "trsvcid": "4420", 00:30:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.479 "hostaddr": "10.0.0.2", 00:30:19.479 "hostsvcid": "60000", 00:30:19.479 "prchk_reftag": false, 00:30:19.479 "prchk_guard": false, 00:30:19.479 "hdgst": false, 00:30:19.479 "ddgst": false, 00:30:19.479 "multipath": "disable", 00:30:19.479 "method": "bdev_nvme_attach_controller", 00:30:19.479 "req_id": 1 00:30:19.479 } 00:30:19.479 Got JSON-RPC error response 00:30:19.479 response: 00:30:19.479 { 00:30:19.479 "code": -114, 00:30:19.479 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:19.479 } 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.479 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.479 07:57:10 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.480 request: 00:30:19.480 { 00:30:19.480 "name": "NVMe0", 00:30:19.480 "trtype": "tcp", 00:30:19.480 "traddr": "10.0.0.2", 00:30:19.480 "adrfam": "ipv4", 00:30:19.480 "trsvcid": "4420", 00:30:19.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.480 "hostaddr": "10.0.0.2", 00:30:19.480 "hostsvcid": "60000", 00:30:19.480 "prchk_reftag": false, 00:30:19.480 "prchk_guard": false, 00:30:19.480 "hdgst": false, 00:30:19.480 "ddgst": false, 00:30:19.480 "multipath": "failover", 00:30:19.480 "method": "bdev_nvme_attach_controller", 00:30:19.480 "req_id": 1 00:30:19.480 } 00:30:19.480 Got JSON-RPC error response 00:30:19.480 response: 00:30:19.480 { 00:30:19.480 "code": -114, 00:30:19.480 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:19.480 } 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.480 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.480 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.737 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:19.737 07:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:21.113 0 00:30:21.113 07:57:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:21.113 07:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.113 07:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1177254 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1177254 ']' 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1177254 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1177254 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1177254' 00:30:21.113 killing process with pid 1177254 00:30:21.113 07:57:12 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1177254 00:30:21.113 07:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1177254 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:22.085 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:30:22.086 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:22.086 [2024-07-15 07:57:09.359753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:22.086 [2024-07-15 07:57:09.359944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177254 ] 00:30:22.086 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.086 [2024-07-15 07:57:09.490290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.086 [2024-07-15 07:57:09.729118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.086 [2024-07-15 07:57:10.855713] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 92c0ad32-925b-41d6-afd0-7011cf5b9f44 already exists 00:30:22.086 [2024-07-15 07:57:10.855776] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:92c0ad32-925b-41d6-afd0-7011cf5b9f44 alias for bdev NVMe1n1 00:30:22.086 [2024-07-15 07:57:10.855800] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:22.086 Running I/O for 1 seconds... 
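Note on the RPC exchanges traced earlier: bdev_nvme_attach_controller refuses to reuse the bdev name NVMe0 against an already-attached network path, returning JSON-RPC error -114 whether a different hostnqn (-q), a different subsystem (cnode2), -x disable, or -x failover is supplied; attaching the same name to a second listener (port 4421) is what actually added the failover path here. A rough standalone equivalent of the two accepted calls, assuming scripts/rpc.py from this SPDK checkout (a sketch of what the rpc_cmd wrapper issues, not the harness itself):

  # first path; surfaces the remote namespace as bdev NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # same bdev name, same portal: JSON-RPC error -114 (see the requests above)
  # same bdev name, new portal on 4421: accepted as an additional path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1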
00:30:22.086
00:30:22.086 Latency(us)
00:30:22.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:22.086 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:30:22.086 NVMe0n1 : 1.01 12635.06 49.36 0.00 0.00 10112.70 2779.21 19320.98
00:30:22.086 ===================================================================================================================
00:30:22.086 Total : 12635.06 49.36 0.00 0.00 10112.70 2779.21 19320.98
00:30:22.086 Received shutdown signal, test time was about 1.000000 seconds
00:30:22.086
00:30:22.086 Latency(us)
00:30:22.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:22.086 ===================================================================================================================
00:30:22.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:22.086 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:22.086 rmmod nvme_tcp
00:30:22.086 rmmod nvme_fabrics
00:30:22.086 rmmod nvme_keyring
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1176910 ']'
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1176910
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1176910 ']'
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1176910
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1176910
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:30:22.086 07:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1176910'
00:30:22.086 killing process with pid 1176910
00:30:22.086 07:57:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1176910 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.462 07:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.000 07:57:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.000 00:30:26.000 real 0m10.808s 00:30:26.000 user 0m22.472s 00:30:26.000 sys 0m2.530s 00:30:26.000 07:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:26.000 07:57:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:26.000 ************************************ 00:30:26.000 END TEST nvmf_multicontroller 00:30:26.000 ************************************ 00:30:26.000 07:57:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:26.000 07:57:16 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:26.000 07:57:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:26.000 07:57:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.000 07:57:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:26.000 ************************************ 00:30:26.000 START TEST nvmf_aer 00:30:26.000 ************************************ 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:26.000 * Looking for test storage... 
00:30:26.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.000 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.001 07:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:27.901 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:30:27.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:27.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:27.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.901 
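Note: the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) globbing traced above is how nvmf/common.sh resolves each E810 PCI function to its kernel netdev name before picking target and initiator interfaces. Checked by hand it would look roughly like this (illustrative only):

  ls /sys/bus/pci/devices/0000:0a:00.0/net    # -> cvl_0_0 (becomes NVMF_TARGET_INTERFACE)
  ls /sys/bus/pci/devices/0000:0a:00.1/net    # -> cvl_0_1 (becomes NVMF_INITIATOR_INTERFACE)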
07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.901 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:30:27.902 00:30:27.902 --- 10.0.0.2 ping statistics --- 00:30:27.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.902 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:30:27.902 00:30:27.902 --- 10.0.0.1 ping statistics --- 00:30:27.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.902 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1179742 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1179742 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1179742 ']' 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.902 07:57:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.902 [2024-07-15 07:57:18.915190] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:27.902 [2024-07-15 07:57:18.915320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.902 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.902 [2024-07-15 07:57:19.046385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.161 [2024-07-15 07:57:19.272790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.161 [2024-07-15 07:57:19.272875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
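Note: at this point nvmf_tgt (pid 1179742) is running inside the cvl_0_0_ns_spdk network namespace, and the entries that follow provision it over /var/tmp/spdk.sock. Replayed by hand, the sequence would be roughly the following (a sketch of the rpc_cmd calls traced below, assuming scripts/rpc.py from this checkout):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420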
00:30:28.161 [2024-07-15 07:57:19.272907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.161 [2024-07-15 07:57:19.272948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.161 [2024-07-15 07:57:19.272966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.161 [2024-07-15 07:57:19.273078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.161 [2024-07-15 07:57:19.273133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.161 [2024-07-15 07:57:19.273174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.161 [2024-07-15 07:57:19.273184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.729 [2024-07-15 07:57:19.857683] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.729 Malloc0 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.729 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.989 [2024-07-15 07:57:19.965147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 ***
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:30:28.989 [
00:30:28.989 {
00:30:28.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:30:28.989 "subtype": "Discovery",
00:30:28.989 "listen_addresses": [],
00:30:28.989 "allow_any_host": true,
00:30:28.989 "hosts": []
00:30:28.989 },
00:30:28.989 {
00:30:28.989 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:30:28.989 "subtype": "NVMe",
00:30:28.989 "listen_addresses": [
00:30:28.989 {
00:30:28.989 "trtype": "TCP",
00:30:28.989 "adrfam": "IPv4",
00:30:28.989 "traddr": "10.0.0.2",
00:30:28.989 "trsvcid": "4420"
00:30:28.989 }
00:30:28.989 ],
00:30:28.989 "allow_any_host": true,
00:30:28.989 "hosts": [],
00:30:28.989 "serial_number": "SPDK00000000000001",
00:30:28.989 "model_number": "SPDK bdev Controller",
00:30:28.989 "max_namespaces": 2,
00:30:28.989 "min_cntlid": 1,
00:30:28.989 "max_cntlid": 65519,
00:30:28.989 "namespaces": [
00:30:28.989 {
00:30:28.989 "nsid": 1,
00:30:28.989 "bdev_name": "Malloc0",
00:30:28.989 "name": "Malloc0",
00:30:28.989 "nguid": "0CC97BBFD8B8455485049A8835A43D94",
00:30:28.989 "uuid": "0cc97bbf-d8b8-4554-8504-9a8835a43d94"
00:30:28.989 }
00:30:28.989 ]
00:30:28.989 }
00:30:28.989 ]
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1179898
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:30:28.989 07:57:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:30:28.989 EAL: No free 2048 kB hugepages reported on node 1
00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:28.989 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.249 Malloc1 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.249 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.508 [ 00:30:29.508 { 00:30:29.508 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:29.509 "subtype": "Discovery", 00:30:29.509 "listen_addresses": [], 00:30:29.509 "allow_any_host": true, 00:30:29.509 "hosts": [] 00:30:29.509 }, 00:30:29.509 { 00:30:29.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.509 "subtype": "NVMe", 00:30:29.509 "listen_addresses": [ 00:30:29.509 { 00:30:29.509 "trtype": "TCP", 00:30:29.509 "adrfam": "IPv4", 00:30:29.509 "traddr": "10.0.0.2", 00:30:29.509 "trsvcid": "4420" 00:30:29.509 } 00:30:29.509 ], 00:30:29.509 "allow_any_host": true, 00:30:29.509 "hosts": [], 00:30:29.509 "serial_number": "SPDK00000000000001", 00:30:29.509 "model_number": "SPDK bdev Controller", 00:30:29.509 "max_namespaces": 2, 00:30:29.509 "min_cntlid": 1, 00:30:29.509 "max_cntlid": 65519, 00:30:29.509 "namespaces": [ 00:30:29.509 { 00:30:29.509 "nsid": 1, 00:30:29.509 "bdev_name": "Malloc0", 00:30:29.509 "name": "Malloc0", 00:30:29.509 "nguid": "0CC97BBFD8B8455485049A8835A43D94", 00:30:29.509 "uuid": "0cc97bbf-d8b8-4554-8504-9a8835a43d94" 00:30:29.509 }, 00:30:29.509 { 00:30:29.509 "nsid": 2, 00:30:29.509 "bdev_name": "Malloc1", 00:30:29.509 "name": "Malloc1", 00:30:29.509 "nguid": "706A2CED74074747AB94AFBD71BBA195", 00:30:29.509 "uuid": "706a2ced-7407-4747-ab94-afbd71bba195" 00:30:29.509 } 00:30:29.509 ] 00:30:29.509 } 00:30:29.509 ] 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1179898 00:30:29.509 Asynchronous Event Request test 00:30:29.509 Attaching to 10.0.0.2 00:30:29.509 Attached to 10.0.0.2 00:30:29.509 Registering asynchronous event callbacks... 00:30:29.509 Starting namespace attribute notice tests for all controllers... 
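Note: this is the event under test. With the aer tool connected (started above against nqn.2016-06.io.spdk:cnode1 with -n 2 and the touch file), hot-adding a second namespace makes the controller complete an Asynchronous Event Request with a Namespace Attribute Changed notice, after which the host reads log page 4 (Changed Namespace List), as the next entry shows. The triggering RPCs, as traced above (sketch, assuming scripts/rpc.py from this checkout):

  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  scripts/rpc.py nvmf_get_subsystems    # now lists nsid 1 (Malloc0) and nsid 2 (Malloc1)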
00:30:29.509 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:29.509 aer_cb - Changed Namespace 00:30:29.509 Cleaning up... 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.509 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:29.767 rmmod nvme_tcp 00:30:29.767 rmmod nvme_fabrics 00:30:29.767 rmmod nvme_keyring 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1179742 ']' 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1179742 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1179742 ']' 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1179742 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1179742 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1179742' 00:30:29.767 killing process with pid 1179742 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1179742 00:30:29.767 07:57:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1179742 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.174 07:57:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.079 07:57:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:33.079 00:30:33.079 real 0m7.524s 00:30:33.079 user 0m11.048s 00:30:33.079 sys 0m2.093s 00:30:33.079 07:57:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:33.079 07:57:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:33.079 ************************************ 00:30:33.079 END TEST nvmf_aer 00:30:33.079 ************************************ 00:30:33.338 07:57:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:33.338 07:57:24 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:33.338 07:57:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:33.338 07:57:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:33.338 07:57:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:33.338 ************************************ 00:30:33.338 START TEST nvmf_async_init 00:30:33.338 ************************************ 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:33.338 * Looking for test storage... 
00:30:33.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:33.338 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9475b948d9b94c089fd2467cc64027b2 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:33.339 07:57:24 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:33.339 07:57:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:35.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:35.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:35.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
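The discovery loop traced above (nvmf/common.sh@383 and @399) resolves each matched PCI function to its kernel net device purely by globbing sysfs. The same lookup, reduced to a standalone sketch for the first E810 port found on this host:

# Resolve a PCI function to its net device, mirroring the pci_net_devs logic above.
pci=0000:0a:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${dev##*/}"   # prints cvl_0_0 on this host
done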
00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:35.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.244 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:35.245 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:35.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:30:35.504 00:30:35.504 --- 10.0.0.2 ping statistics --- 00:30:35.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.504 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:30:35.504 00:30:35.504 --- 10.0.0.1 ping statistics --- 00:30:35.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.504 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1182082 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1182082 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1182082 ']' 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.504 07:57:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.504 [2024-07-15 07:57:26.639576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
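nvmf_tcp_init, traced across the last two blocks, rebuilds the same back-to-back topology for every tcp/phy suite: the first E810 port becomes the target inside a private namespace, the second stays in the root namespace as the initiator, and both directions are verified with a single ping before nvmf_tgt starts. Condensed from the commands above (interface names as discovered on this host):

# Topology as set up by nvmf/common.sh on this node.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator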
00:30:35.504 [2024-07-15 07:57:26.639710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.504 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.763 [2024-07-15 07:57:26.776962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.023 [2024-07-15 07:57:27.033308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.023 [2024-07-15 07:57:27.033385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.023 [2024-07-15 07:57:27.033413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.023 [2024-07-15 07:57:27.033437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.023 [2024-07-15 07:57:27.033459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.023 [2024-07-15 07:57:27.033514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 [2024-07-15 07:57:27.569270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 null0 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 07:57:27 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9475b948d9b94c089fd2467cc64027b2 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.592 [2024-07-15 07:57:27.609541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.592 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.851 nvme0n1 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.851 [ 00:30:36.851 { 00:30:36.851 "name": "nvme0n1", 00:30:36.851 "aliases": [ 00:30:36.851 "9475b948-d9b9-4c08-9fd2-467cc64027b2" 00:30:36.851 ], 00:30:36.851 "product_name": "NVMe disk", 00:30:36.851 "block_size": 512, 00:30:36.851 "num_blocks": 2097152, 00:30:36.851 "uuid": "9475b948-d9b9-4c08-9fd2-467cc64027b2", 00:30:36.851 "assigned_rate_limits": { 00:30:36.851 "rw_ios_per_sec": 0, 00:30:36.851 "rw_mbytes_per_sec": 0, 00:30:36.851 "r_mbytes_per_sec": 0, 00:30:36.851 "w_mbytes_per_sec": 0 00:30:36.851 }, 00:30:36.851 "claimed": false, 00:30:36.851 "zoned": false, 00:30:36.851 "supported_io_types": { 00:30:36.851 "read": true, 00:30:36.851 "write": true, 00:30:36.851 "unmap": false, 00:30:36.851 "flush": true, 00:30:36.851 "reset": true, 00:30:36.851 "nvme_admin": true, 00:30:36.851 "nvme_io": true, 00:30:36.851 "nvme_io_md": false, 00:30:36.851 "write_zeroes": true, 00:30:36.851 "zcopy": false, 00:30:36.851 "get_zone_info": false, 00:30:36.851 "zone_management": false, 00:30:36.851 "zone_append": false, 00:30:36.851 "compare": true, 00:30:36.851 "compare_and_write": true, 00:30:36.851 "abort": true, 00:30:36.851 "seek_hole": false, 00:30:36.851 "seek_data": false, 00:30:36.851 "copy": true, 00:30:36.851 "nvme_iov_md": false 00:30:36.851 }, 00:30:36.851 "memory_domains": [ 00:30:36.851 { 00:30:36.851 "dma_device_id": "system", 00:30:36.851 "dma_device_type": 1 00:30:36.851 } 00:30:36.851 ], 00:30:36.851 "driver_specific": { 00:30:36.851 "nvme": [ 00:30:36.851 { 00:30:36.851 "trid": { 00:30:36.851 "trtype": "TCP", 00:30:36.851 "adrfam": "IPv4", 00:30:36.851 "traddr": "10.0.0.2", 
00:30:36.851 "trsvcid": "4420", 00:30:36.851 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.851 }, 00:30:36.851 "ctrlr_data": { 00:30:36.851 "cntlid": 1, 00:30:36.851 "vendor_id": "0x8086", 00:30:36.851 "model_number": "SPDK bdev Controller", 00:30:36.851 "serial_number": "00000000000000000000", 00:30:36.851 "firmware_revision": "24.09", 00:30:36.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.851 "oacs": { 00:30:36.851 "security": 0, 00:30:36.851 "format": 0, 00:30:36.851 "firmware": 0, 00:30:36.851 "ns_manage": 0 00:30:36.851 }, 00:30:36.851 "multi_ctrlr": true, 00:30:36.851 "ana_reporting": false 00:30:36.851 }, 00:30:36.851 "vs": { 00:30:36.851 "nvme_version": "1.3" 00:30:36.851 }, 00:30:36.851 "ns_data": { 00:30:36.851 "id": 1, 00:30:36.851 "can_share": true 00:30:36.851 } 00:30:36.851 } 00:30:36.851 ], 00:30:36.851 "mp_policy": "active_passive" 00:30:36.851 } 00:30:36.851 } 00:30:36.851 ] 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.851 07:57:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.851 [2024-07-15 07:57:27.866182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:36.851 [2024-07-15 07:57:27.866311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:30:36.852 [2024-07-15 07:57:27.999114] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.852 [ 00:30:36.852 { 00:30:36.852 "name": "nvme0n1", 00:30:36.852 "aliases": [ 00:30:36.852 "9475b948-d9b9-4c08-9fd2-467cc64027b2" 00:30:36.852 ], 00:30:36.852 "product_name": "NVMe disk", 00:30:36.852 "block_size": 512, 00:30:36.852 "num_blocks": 2097152, 00:30:36.852 "uuid": "9475b948-d9b9-4c08-9fd2-467cc64027b2", 00:30:36.852 "assigned_rate_limits": { 00:30:36.852 "rw_ios_per_sec": 0, 00:30:36.852 "rw_mbytes_per_sec": 0, 00:30:36.852 "r_mbytes_per_sec": 0, 00:30:36.852 "w_mbytes_per_sec": 0 00:30:36.852 }, 00:30:36.852 "claimed": false, 00:30:36.852 "zoned": false, 00:30:36.852 "supported_io_types": { 00:30:36.852 "read": true, 00:30:36.852 "write": true, 00:30:36.852 "unmap": false, 00:30:36.852 "flush": true, 00:30:36.852 "reset": true, 00:30:36.852 "nvme_admin": true, 00:30:36.852 "nvme_io": true, 00:30:36.852 "nvme_io_md": false, 00:30:36.852 "write_zeroes": true, 00:30:36.852 "zcopy": false, 00:30:36.852 "get_zone_info": false, 00:30:36.852 "zone_management": false, 00:30:36.852 "zone_append": false, 00:30:36.852 "compare": true, 00:30:36.852 "compare_and_write": true, 00:30:36.852 "abort": true, 00:30:36.852 "seek_hole": false, 00:30:36.852 "seek_data": false, 00:30:36.852 "copy": true, 00:30:36.852 "nvme_iov_md": false 00:30:36.852 }, 00:30:36.852 "memory_domains": [ 00:30:36.852 { 00:30:36.852 "dma_device_id": "system", 00:30:36.852 
"dma_device_type": 1 00:30:36.852 } 00:30:36.852 ], 00:30:36.852 "driver_specific": { 00:30:36.852 "nvme": [ 00:30:36.852 { 00:30:36.852 "trid": { 00:30:36.852 "trtype": "TCP", 00:30:36.852 "adrfam": "IPv4", 00:30:36.852 "traddr": "10.0.0.2", 00:30:36.852 "trsvcid": "4420", 00:30:36.852 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.852 }, 00:30:36.852 "ctrlr_data": { 00:30:36.852 "cntlid": 2, 00:30:36.852 "vendor_id": "0x8086", 00:30:36.852 "model_number": "SPDK bdev Controller", 00:30:36.852 "serial_number": "00000000000000000000", 00:30:36.852 "firmware_revision": "24.09", 00:30:36.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.852 "oacs": { 00:30:36.852 "security": 0, 00:30:36.852 "format": 0, 00:30:36.852 "firmware": 0, 00:30:36.852 "ns_manage": 0 00:30:36.852 }, 00:30:36.852 "multi_ctrlr": true, 00:30:36.852 "ana_reporting": false 00:30:36.852 }, 00:30:36.852 "vs": { 00:30:36.852 "nvme_version": "1.3" 00:30:36.852 }, 00:30:36.852 "ns_data": { 00:30:36.852 "id": 1, 00:30:36.852 "can_share": true 00:30:36.852 } 00:30:36.852 } 00:30:36.852 ], 00:30:36.852 "mp_policy": "active_passive" 00:30:36.852 } 00:30:36.852 } 00:30:36.852 ] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.nVRpgvzH1Z 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.nVRpgvzH1Z 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.852 [2024-07-15 07:57:28.050913] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:36.852 [2024-07-15 07:57:28.051105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nVRpgvzH1Z 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.852 [2024-07-15 07:57:28.058904] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nVRpgvzH1Z 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.852 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.852 [2024-07-15 07:57:28.066938] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:36.852 [2024-07-15 07:57:28.067055] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:37.111 nvme0n1 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.111 [ 00:30:37.111 { 00:30:37.111 "name": "nvme0n1", 00:30:37.111 "aliases": [ 00:30:37.111 "9475b948-d9b9-4c08-9fd2-467cc64027b2" 00:30:37.111 ], 00:30:37.111 "product_name": "NVMe disk", 00:30:37.111 "block_size": 512, 00:30:37.111 "num_blocks": 2097152, 00:30:37.111 "uuid": "9475b948-d9b9-4c08-9fd2-467cc64027b2", 00:30:37.111 "assigned_rate_limits": { 00:30:37.111 "rw_ios_per_sec": 0, 00:30:37.111 "rw_mbytes_per_sec": 0, 00:30:37.111 "r_mbytes_per_sec": 0, 00:30:37.111 "w_mbytes_per_sec": 0 00:30:37.111 }, 00:30:37.111 "claimed": false, 00:30:37.111 "zoned": false, 00:30:37.111 "supported_io_types": { 00:30:37.111 "read": true, 00:30:37.111 "write": true, 00:30:37.111 "unmap": false, 00:30:37.111 "flush": true, 00:30:37.111 "reset": true, 00:30:37.111 "nvme_admin": true, 00:30:37.111 "nvme_io": true, 00:30:37.111 "nvme_io_md": false, 00:30:37.111 "write_zeroes": true, 00:30:37.111 "zcopy": false, 00:30:37.111 "get_zone_info": false, 00:30:37.111 "zone_management": false, 00:30:37.111 "zone_append": false, 00:30:37.111 "compare": true, 00:30:37.111 "compare_and_write": true, 00:30:37.111 "abort": true, 00:30:37.111 "seek_hole": false, 00:30:37.111 "seek_data": false, 00:30:37.111 "copy": true, 00:30:37.111 "nvme_iov_md": false 00:30:37.111 }, 00:30:37.111 "memory_domains": [ 00:30:37.111 { 00:30:37.111 "dma_device_id": "system", 00:30:37.111 "dma_device_type": 1 00:30:37.111 } 00:30:37.111 ], 00:30:37.111 "driver_specific": { 00:30:37.111 "nvme": [ 00:30:37.111 { 00:30:37.111 "trid": { 00:30:37.111 "trtype": "TCP", 00:30:37.111 "adrfam": "IPv4", 00:30:37.111 "traddr": "10.0.0.2", 00:30:37.111 "trsvcid": "4421", 00:30:37.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:37.111 }, 00:30:37.111 "ctrlr_data": { 00:30:37.111 "cntlid": 3, 00:30:37.111 "vendor_id": "0x8086", 00:30:37.111 "model_number": "SPDK bdev Controller", 00:30:37.111 "serial_number": "00000000000000000000", 00:30:37.111 "firmware_revision": "24.09", 00:30:37.111 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:37.111 "oacs": { 00:30:37.111 "security": 0, 00:30:37.111 "format": 0, 00:30:37.111 "firmware": 0, 00:30:37.111 "ns_manage": 0 00:30:37.111 }, 00:30:37.111 "multi_ctrlr": true, 00:30:37.111 "ana_reporting": false 00:30:37.111 }, 00:30:37.111 "vs": { 00:30:37.111 "nvme_version": "1.3" 00:30:37.111 }, 00:30:37.111 "ns_data": { 00:30:37.111 "id": 1, 00:30:37.111 "can_share": true 00:30:37.111 } 00:30:37.111 } 00:30:37.111 ], 00:30:37.111 "mp_policy": "active_passive" 00:30:37.111 } 00:30:37.111 } 00:30:37.111 ] 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.nVRpgvzH1Z 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.111 rmmod nvme_tcp 00:30:37.111 rmmod nvme_fabrics 00:30:37.111 rmmod nvme_keyring 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1182082 ']' 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1182082 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1182082 ']' 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1182082 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1182082 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1182082' 00:30:37.111 killing process with pid 1182082 00:30:37.111 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1182082 00:30:37.111 [2024-07-15 07:57:28.271179] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:37.111 [2024-07-15 07:57:28.271245] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 07:57:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1182082 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.490 07:57:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.050 07:57:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:41.050 00:30:41.050 real 0m7.298s 00:30:41.050 user 0m3.876s 00:30:41.050 sys 0m2.019s 00:30:41.050 07:57:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:41.050 07:57:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:41.050 ************************************ 00:30:41.050 END TEST nvmf_async_init 00:30:41.050 ************************************ 00:30:41.050 07:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:41.050 07:57:31 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:41.050 07:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:41.050 07:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.050 07:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.050 ************************************ 00:30:41.050 START TEST dma 00:30:41.050 ************************************ 00:30:41.050 07:57:31 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:41.050 * Looking for test storage...
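Before the dma suite proceeds, a recap of the nvmf_async_init run that just ended: it exercised attach/reset/detach over plain TCP and then the experimental TLS path. Stripped of the rpc_cmd plumbing (rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock) and the bdev_get_bdevs JSON checks, the RPC sequence it drove was, as a sketch:

# Target side (nvmf_tgt runs inside cvl_0_0_ns_spdk).
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9475b948d9b94c089fd2467cc64027b2
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Plain-TCP leg: attach (cntlid 1), reset (cntlid becomes 2), detach.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc.py bdev_nvme_reset_controller nvme0
rpc.py bdev_nvme_detach_controller nvme0
# TLS leg: secure listener on 4421, per-host PSK file, attach with --psk (cntlid 3).
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nVRpgvzH1Z
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nVRpgvzH1Z
rpc.py bdev_nvme_detach_controller nvme0

The PSK file held the NVMeTLSkey-1 interop string echoed in the trace; both PSK-path mechanisms are flagged deprecated for v24.09 in the warnings above.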
00:30:41.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.050 07:57:31 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.050 07:57:31 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.050 07:57:31 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.050 07:57:31 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.050 07:57:31 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.050 07:57:31 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.050 07:57:31 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.050 07:57:31 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:41.050 07:57:31 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.050 07:57:31 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.050 07:57:31 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:41.050 07:57:31 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:41.050 00:30:41.050 real 0m0.060s 00:30:41.050 user 0m0.024s 00:30:41.050 sys 0m0.041s 00:30:41.050 07:57:31 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:41.051 07:57:31 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:41.051 ************************************ 00:30:41.051 END TEST dma 00:30:41.051 ************************************ 00:30:41.051 07:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:41.051 07:57:31 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:41.051 07:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:41.051 07:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.051 07:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.051 ************************************ 00:30:41.051 START TEST nvmf_identify 00:30:41.051 ************************************ 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:41.051 * Looking for test storage... 
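The dma suite just above is effectively a no-op on this transport: host/dma.sh@12-13 show the guard firing ('[' tcp '!=' rdma ']' then exit 0) before any target is started, which is why the whole test accounts for roughly 60 ms. The guard amounts to the following sketch (the variable name is an assumption; the trace only shows the expanded values):

# dma.sh bails out immediately unless the transport under test is RDMA.
if [ "$TEST_TRANSPORT" != "rdma" ]; then   # expands to [ tcp != rdma ] in this run
    exit 0
fi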
00:30:41.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:41.051 07:57:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.434 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:42.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:42.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:42.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.692 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:42.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:30:42.693 00:30:42.693 --- 10.0.0.2 ping statistics --- 00:30:42.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.693 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:30:42.693 00:30:42.693 --- 10.0.0.1 ping statistics --- 00:30:42.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.693 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1184355 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1184355 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1184355 ']' 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:42.693 07:57:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.952 [2024-07-15 07:57:33.922031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:42.952 [2024-07-15 07:57:33.922195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.952 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.952 [2024-07-15 07:57:34.060095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.210 [2024-07-15 07:57:34.314400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
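For reference: at this point the harness has found the two E810 ports (PCI ID 0x8086:0x159b, driver ice, exposed as cvl_0_0 and cvl_0_1) and nvmf_tcp_init has split them into a target/initiator pair. A minimal sketch of that plumbing, using only the commands and names visible in the trace above (cvl_0_0 becomes the target side, cvl_0_1 the initiator side):

    # target port moves into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # target port gets its address and comes up inside the namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in through the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # sanity pings in both directions, as logged above

Moving one physical port into a namespace is what lets a single host exercise real NIC hardware end-to-end: the target listens inside cvl_0_0_ns_spdk while the initiator reaches it over the wire via cvl_0_1.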
00:30:43.210 [2024-07-15 07:57:34.314469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.210 [2024-07-15 07:57:34.314496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.210 [2024-07-15 07:57:34.314516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.210 [2024-07-15 07:57:34.314536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.210 [2024-07-15 07:57:34.314680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.210 [2024-07-15 07:57:34.314749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.210 [2024-07-15 07:57:34.314832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.210 [2024-07-15 07:57:34.314842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 [2024-07-15 07:57:34.837247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 Malloc0 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 [2024-07-15 07:57:34.967014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 [ 00:30:43.778 { 00:30:43.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.778 "subtype": "Discovery", 00:30:43.778 "listen_addresses": [ 00:30:43.778 { 00:30:43.778 "trtype": "TCP", 00:30:43.778 "adrfam": "IPv4", 00:30:43.778 "traddr": "10.0.0.2", 00:30:43.778 "trsvcid": "4420" 00:30:43.778 } 00:30:43.778 ], 00:30:43.778 "allow_any_host": true, 00:30:43.778 "hosts": [] 00:30:43.778 }, 00:30:43.778 { 00:30:43.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.778 "subtype": "NVMe", 00:30:43.778 "listen_addresses": [ 00:30:43.778 { 00:30:43.778 "trtype": "TCP", 00:30:43.778 "adrfam": "IPv4", 00:30:43.778 "traddr": "10.0.0.2", 00:30:43.778 "trsvcid": "4420" 00:30:43.778 } 00:30:43.778 ], 00:30:43.778 "allow_any_host": true, 00:30:43.778 "hosts": [], 00:30:43.778 "serial_number": "SPDK00000000000001", 00:30:43.778 "model_number": "SPDK bdev Controller", 00:30:43.778 "max_namespaces": 32, 00:30:43.778 "min_cntlid": 1, 00:30:43.778 "max_cntlid": 65519, 00:30:43.778 "namespaces": [ 00:30:43.778 { 00:30:43.778 "nsid": 1, 00:30:43.778 "bdev_name": "Malloc0", 00:30:43.778 "name": "Malloc0", 00:30:43.778 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:43.778 "eui64": "ABCDEF0123456789", 00:30:43.778 "uuid": "22e084e1-822f-400f-b42b-ed1cf1eafc8b" 00:30:43.778 } 00:30:43.778 ] 00:30:43.778 } 00:30:43.778 ] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.778 07:57:34 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:44.040 [2024-07-15 07:57:35.034266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
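The rpc_cmd invocations above are the test helpers' wrapper around SPDK's scripts/rpc.py, driving the nvmf_tgt started earlier over its default /var/tmp/spdk.sock. Stripped of the wrapper, the configuration sequence for this test is roughly the following sketch (flags exactly as logged above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems    # returns the JSON dump shown above

Note how -a on nvmf_create_subsystem surfaces as "allow_any_host": true in the dump, and the --nguid/--eui64 values reappear verbatim in the namespace entry.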
00:30:44.041 [2024-07-15 07:57:35.034357] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184507 ] 00:30:44.041 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.041 [2024-07-15 07:57:35.091545] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:44.041 [2024-07-15 07:57:35.091662] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:44.041 [2024-07-15 07:57:35.091688] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:44.041 [2024-07-15 07:57:35.091717] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:44.041 [2024-07-15 07:57:35.091741] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:44.041 [2024-07-15 07:57:35.094965] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:44.041 [2024-07-15 07:57:35.095046] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:44.041 [2024-07-15 07:57:35.102901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:44.041 [2024-07-15 07:57:35.102932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:44.041 [2024-07-15 07:57:35.102947] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:44.041 [2024-07-15 07:57:35.102958] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:44.041 [2024-07-15 07:57:35.103029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.103049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.103068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.103098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:44.041 [2024-07-15 07:57:35.103137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.110907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.110942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.110955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.110973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.111013] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:44.041 [2024-07-15 07:57:35.111039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:44.041 [2024-07-15 07:57:35.111056] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:44.041 [2024-07-15 07:57:35.111085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111103] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.111136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.041 [2024-07-15 07:57:35.111197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.111443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.111466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.111478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.111506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:44.041 [2024-07-15 07:57:35.111533] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:44.041 [2024-07-15 07:57:35.111554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.111606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.041 [2024-07-15 07:57:35.111640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.111839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.111864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.111885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.111914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:44.041 [2024-07-15 07:57:35.111941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:44.041 [2024-07-15 07:57:35.111966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.111997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.112016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.041 [2024-07-15 07:57:35.112048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.112249] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.112272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.112283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.112294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.112310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:44.041 [2024-07-15 07:57:35.112337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.112352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.112364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.112388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.041 [2024-07-15 07:57:35.112421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.112586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.112612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.112625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.112636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.112650] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:44.041 [2024-07-15 07:57:35.112670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:44.041 [2024-07-15 07:57:35.112700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:44.041 [2024-07-15 07:57:35.112818] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:44.041 [2024-07-15 07:57:35.112847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:44.041 [2024-07-15 07:57:35.112870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.112892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.112920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.112950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.041 [2024-07-15 07:57:35.113002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.113253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.113274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.113285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.113297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.113311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:44.041 [2024-07-15 07:57:35.113347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.113364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.113376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.041 [2024-07-15 07:57:35.113395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.041 [2024-07-15 07:57:35.113427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.041 [2024-07-15 07:57:35.113625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.041 [2024-07-15 07:57:35.113647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.041 [2024-07-15 07:57:35.113658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.041 [2024-07-15 07:57:35.113669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.041 [2024-07-15 07:57:35.113683] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:44.041 [2024-07-15 07:57:35.113710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:44.041 [2024-07-15 07:57:35.113734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:44.041 [2024-07-15 07:57:35.113760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:44.042 [2024-07-15 07:57:35.113791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.113809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.113830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.042 [2024-07-15 07:57:35.113862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.042 [2024-07-15 07:57:35.114109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.042 [2024-07-15 07:57:35.114131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.042 [2024-07-15 07:57:35.114143] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.114155] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:44.042 [2024-07-15 07:57:35.114169] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.042 [2024-07-15 07:57:35.114182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.114226] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.114242] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.042 [2024-07-15 07:57:35.155084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.042 [2024-07-15 07:57:35.155098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.042 [2024-07-15 07:57:35.155140] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:44.042 [2024-07-15 07:57:35.155157] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:44.042 [2024-07-15 07:57:35.155184] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:44.042 [2024-07-15 07:57:35.155198] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:44.042 [2024-07-15 07:57:35.155216] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:44.042 [2024-07-15 07:57:35.155234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:44.042 [2024-07-15 07:57:35.155257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:44.042 [2024-07-15 07:57:35.155284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.155353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.042 [2024-07-15 07:57:35.155390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.042 [2024-07-15 07:57:35.155587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.042 [2024-07-15 07:57:35.155610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.042 [2024-07-15 07:57:35.155621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.042 [2024-07-15 07:57:35.155652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
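This stretch of the trace is spdk_nvme_identify performing the standard fabrics controller bring-up against the discovery subsystem: ICReq/ICResp exchange, FABRIC CONNECT, VS and CAP property reads, setting CC.EN = 1 and polling for CSTS.RDY = 1, then IDENTIFY controller (cdw10:00000001, i.e. CNS 01h) and async-event configuration. As an out-of-band cross-check, assuming the kernel nvme-tcp initiator and nvme-cli are available (neither is used by this test), the same listener could be exercised with:

    nvme discover -t tcp -a 10.0.0.2 -s 4420    # lists the two discovery log entries printed below
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1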
00:30:44.042 [2024-07-15 07:57:35.155703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.042 [2024-07-15 07:57:35.155721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.155759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.042 [2024-07-15 07:57:35.155775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.155813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.042 [2024-07-15 07:57:35.155844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.155874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.155904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.042 [2024-07-15 07:57:35.155920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:44.042 [2024-07-15 07:57:35.155972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:44.042 [2024-07-15 07:57:35.155997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.156030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.042 [2024-07-15 07:57:35.156070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.042 [2024-07-15 07:57:35.156088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:44.042 [2024-07-15 07:57:35.156101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:44.042 [2024-07-15 07:57:35.156114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.042 [2024-07-15 07:57:35.156126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.042 [2024-07-15 07:57:35.156340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.042 [2024-07-15 07:57:35.156362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.042 [2024-07-15 07:57:35.156374] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.042 [2024-07-15 07:57:35.156400] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:44.042 [2024-07-15 07:57:35.156416] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:44.042 [2024-07-15 07:57:35.156448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.156484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.042 [2024-07-15 07:57:35.156516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.042 [2024-07-15 07:57:35.156728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.042 [2024-07-15 07:57:35.156751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.042 [2024-07-15 07:57:35.156763] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156775] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.042 [2024-07-15 07:57:35.156795] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.042 [2024-07-15 07:57:35.156808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156826] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156840] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.042 [2024-07-15 07:57:35.156884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.042 [2024-07-15 07:57:35.156901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.156918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.042 [2024-07-15 07:57:35.156961] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:44.042 [2024-07-15 07:57:35.157022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.157068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.042 [2024-07-15 07:57:35.157088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:30:44.042 [2024-07-15 07:57:35.157129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.042 [2024-07-15 07:57:35.157162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.042 [2024-07-15 07:57:35.157186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.042 [2024-07-15 07:57:35.157576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.042 [2024-07-15 07:57:35.157599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.042 [2024-07-15 07:57:35.157611] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157623] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:44.042 [2024-07-15 07:57:35.157635] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:44.042 [2024-07-15 07:57:35.157654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157672] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157685] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.042 [2024-07-15 07:57:35.157721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.042 [2024-07-15 07:57:35.157732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.157743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.042 [2024-07-15 07:57:35.198046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.042 [2024-07-15 07:57:35.198075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.042 [2024-07-15 07:57:35.198087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.042 [2024-07-15 07:57:35.198099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.042 [2024-07-15 07:57:35.198136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.198153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.043 [2024-07-15 07:57:35.198175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.043 [2024-07-15 07:57:35.198225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.043 [2024-07-15 07:57:35.198414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.043 [2024-07-15 07:57:35.198436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.043 [2024-07-15 07:57:35.198448] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.198459] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:44.043 [2024-07-15 07:57:35.198476] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:44.043 [2024-07-15 07:57:35.198488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.198521] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.198537] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.239037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.043 [2024-07-15 07:57:35.239064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.043 [2024-07-15 07:57:35.239076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.239088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.043 [2024-07-15 07:57:35.239118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.239134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.043 [2024-07-15 07:57:35.239164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.043 [2024-07-15 07:57:35.239221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.043 [2024-07-15 07:57:35.239408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.043 [2024-07-15 07:57:35.239430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.043 [2024-07-15 07:57:35.239442] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.239453] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:44.043 [2024-07-15 07:57:35.239464] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:44.043 [2024-07-15 07:57:35.239476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.239501] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.043 [2024-07-15 07:57:35.239515] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.310 [2024-07-15 07:57:35.283924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.310 [2024-07-15 07:57:35.283968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.310 [2024-07-15 07:57:35.283981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.310 [2024-07-15 07:57:35.283993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.310 ===================================================== 00:30:44.310 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:44.310 ===================================================== 00:30:44.310 Controller Capabilities/Features 00:30:44.310 ================================ 00:30:44.310 Vendor ID: 0000 00:30:44.310 Subsystem Vendor ID: 0000 00:30:44.310 Serial Number: .................... 00:30:44.310 Model Number: ........................................ 
00:30:44.310 Firmware Version: 24.09 00:30:44.310 Recommended Arb Burst: 0 00:30:44.310 IEEE OUI Identifier: 00 00 00 00:30:44.310 Multi-path I/O 00:30:44.310 May have multiple subsystem ports: No 00:30:44.310 May have multiple controllers: No 00:30:44.310 Associated with SR-IOV VF: No 00:30:44.310 Max Data Transfer Size: 131072 00:30:44.310 Max Number of Namespaces: 0 00:30:44.310 Max Number of I/O Queues: 1024 00:30:44.310 NVMe Specification Version (VS): 1.3 00:30:44.310 NVMe Specification Version (Identify): 1.3 00:30:44.310 Maximum Queue Entries: 128 00:30:44.310 Contiguous Queues Required: Yes 00:30:44.310 Arbitration Mechanisms Supported 00:30:44.310 Weighted Round Robin: Not Supported 00:30:44.310 Vendor Specific: Not Supported 00:30:44.310 Reset Timeout: 15000 ms 00:30:44.310 Doorbell Stride: 4 bytes 00:30:44.310 NVM Subsystem Reset: Not Supported 00:30:44.310 Command Sets Supported 00:30:44.310 NVM Command Set: Supported 00:30:44.310 Boot Partition: Not Supported 00:30:44.310 Memory Page Size Minimum: 4096 bytes 00:30:44.310 Memory Page Size Maximum: 4096 bytes 00:30:44.310 Persistent Memory Region: Not Supported 00:30:44.310 Optional Asynchronous Events Supported 00:30:44.310 Namespace Attribute Notices: Not Supported 00:30:44.310 Firmware Activation Notices: Not Supported 00:30:44.310 ANA Change Notices: Not Supported 00:30:44.310 PLE Aggregate Log Change Notices: Not Supported 00:30:44.310 LBA Status Info Alert Notices: Not Supported 00:30:44.310 EGE Aggregate Log Change Notices: Not Supported 00:30:44.310 Normal NVM Subsystem Shutdown event: Not Supported 00:30:44.310 Zone Descriptor Change Notices: Not Supported 00:30:44.310 Discovery Log Change Notices: Supported 00:30:44.310 Controller Attributes 00:30:44.310 128-bit Host Identifier: Not Supported 00:30:44.310 Non-Operational Permissive Mode: Not Supported 00:30:44.310 NVM Sets: Not Supported 00:30:44.310 Read Recovery Levels: Not Supported 00:30:44.310 Endurance Groups: Not Supported 00:30:44.310 Predictable Latency Mode: Not Supported 00:30:44.310 Traffic Based Keep ALive: Not Supported 00:30:44.310 Namespace Granularity: Not Supported 00:30:44.310 SQ Associations: Not Supported 00:30:44.310 UUID List: Not Supported 00:30:44.310 Multi-Domain Subsystem: Not Supported 00:30:44.310 Fixed Capacity Management: Not Supported 00:30:44.310 Variable Capacity Management: Not Supported 00:30:44.310 Delete Endurance Group: Not Supported 00:30:44.310 Delete NVM Set: Not Supported 00:30:44.310 Extended LBA Formats Supported: Not Supported 00:30:44.310 Flexible Data Placement Supported: Not Supported 00:30:44.310 00:30:44.310 Controller Memory Buffer Support 00:30:44.310 ================================ 00:30:44.310 Supported: No 00:30:44.310 00:30:44.310 Persistent Memory Region Support 00:30:44.310 ================================ 00:30:44.310 Supported: No 00:30:44.310 00:30:44.310 Admin Command Set Attributes 00:30:44.310 ============================ 00:30:44.310 Security Send/Receive: Not Supported 00:30:44.310 Format NVM: Not Supported 00:30:44.311 Firmware Activate/Download: Not Supported 00:30:44.311 Namespace Management: Not Supported 00:30:44.311 Device Self-Test: Not Supported 00:30:44.311 Directives: Not Supported 00:30:44.311 NVMe-MI: Not Supported 00:30:44.311 Virtualization Management: Not Supported 00:30:44.311 Doorbell Buffer Config: Not Supported 00:30:44.311 Get LBA Status Capability: Not Supported 00:30:44.311 Command & Feature Lockdown Capability: Not Supported 00:30:44.311 Abort Command Limit: 1 00:30:44.311 Async 
Event Request Limit: 4 00:30:44.311 Number of Firmware Slots: N/A 00:30:44.311 Firmware Slot 1 Read-Only: N/A 00:30:44.311 Firmware Activation Without Reset: N/A 00:30:44.311 Multiple Update Detection Support: N/A 00:30:44.311 Firmware Update Granularity: No Information Provided 00:30:44.311 Per-Namespace SMART Log: No 00:30:44.311 Asymmetric Namespace Access Log Page: Not Supported 00:30:44.311 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:44.311 Command Effects Log Page: Not Supported 00:30:44.311 Get Log Page Extended Data: Supported 00:30:44.311 Telemetry Log Pages: Not Supported 00:30:44.311 Persistent Event Log Pages: Not Supported 00:30:44.311 Supported Log Pages Log Page: May Support 00:30:44.311 Commands Supported & Effects Log Page: Not Supported 00:30:44.311 Feature Identifiers & Effects Log Page:May Support 00:30:44.311 NVMe-MI Commands & Effects Log Page: May Support 00:30:44.311 Data Area 4 for Telemetry Log: Not Supported 00:30:44.311 Error Log Page Entries Supported: 128 00:30:44.311 Keep Alive: Not Supported 00:30:44.311 00:30:44.311 NVM Command Set Attributes 00:30:44.311 ========================== 00:30:44.311 Submission Queue Entry Size 00:30:44.311 Max: 1 00:30:44.311 Min: 1 00:30:44.311 Completion Queue Entry Size 00:30:44.311 Max: 1 00:30:44.311 Min: 1 00:30:44.311 Number of Namespaces: 0 00:30:44.311 Compare Command: Not Supported 00:30:44.311 Write Uncorrectable Command: Not Supported 00:30:44.311 Dataset Management Command: Not Supported 00:30:44.311 Write Zeroes Command: Not Supported 00:30:44.311 Set Features Save Field: Not Supported 00:30:44.311 Reservations: Not Supported 00:30:44.311 Timestamp: Not Supported 00:30:44.311 Copy: Not Supported 00:30:44.311 Volatile Write Cache: Not Present 00:30:44.311 Atomic Write Unit (Normal): 1 00:30:44.311 Atomic Write Unit (PFail): 1 00:30:44.311 Atomic Compare & Write Unit: 1 00:30:44.311 Fused Compare & Write: Supported 00:30:44.311 Scatter-Gather List 00:30:44.311 SGL Command Set: Supported 00:30:44.311 SGL Keyed: Supported 00:30:44.311 SGL Bit Bucket Descriptor: Not Supported 00:30:44.311 SGL Metadata Pointer: Not Supported 00:30:44.311 Oversized SGL: Not Supported 00:30:44.311 SGL Metadata Address: Not Supported 00:30:44.311 SGL Offset: Supported 00:30:44.311 Transport SGL Data Block: Not Supported 00:30:44.311 Replay Protected Memory Block: Not Supported 00:30:44.311 00:30:44.311 Firmware Slot Information 00:30:44.311 ========================= 00:30:44.311 Active slot: 0 00:30:44.311 00:30:44.311 00:30:44.311 Error Log 00:30:44.311 ========= 00:30:44.311 00:30:44.311 Active Namespaces 00:30:44.311 ================= 00:30:44.311 Discovery Log Page 00:30:44.311 ================== 00:30:44.311 Generation Counter: 2 00:30:44.311 Number of Records: 2 00:30:44.311 Record Format: 0 00:30:44.311 00:30:44.311 Discovery Log Entry 0 00:30:44.311 ---------------------- 00:30:44.311 Transport Type: 3 (TCP) 00:30:44.311 Address Family: 1 (IPv4) 00:30:44.311 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:44.311 Entry Flags: 00:30:44.311 Duplicate Returned Information: 1 00:30:44.311 Explicit Persistent Connection Support for Discovery: 1 00:30:44.311 Transport Requirements: 00:30:44.311 Secure Channel: Not Required 00:30:44.311 Port ID: 0 (0x0000) 00:30:44.311 Controller ID: 65535 (0xffff) 00:30:44.311 Admin Max SQ Size: 128 00:30:44.311 Transport Service Identifier: 4420 00:30:44.311 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:44.311 Transport Address: 10.0.0.2 00:30:44.311 
Discovery Log Entry 1 00:30:44.311 ---------------------- 00:30:44.311 Transport Type: 3 (TCP) 00:30:44.311 Address Family: 1 (IPv4) 00:30:44.311 Subsystem Type: 2 (NVM Subsystem) 00:30:44.311 Entry Flags: 00:30:44.311 Duplicate Returned Information: 0 00:30:44.311 Explicit Persistent Connection Support for Discovery: 0 00:30:44.311 Transport Requirements: 00:30:44.311 Secure Channel: Not Required 00:30:44.311 Port ID: 0 (0x0000) 00:30:44.311 Controller ID: 65535 (0xffff) 00:30:44.311 Admin Max SQ Size: 128 00:30:44.311 Transport Service Identifier: 4420 00:30:44.311 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:44.311 Transport Address: 10.0.0.2 [2024-07-15 07:57:35.284183] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:44.311 [2024-07-15 07:57:35.284215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.284237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.311 [2024-07-15 07:57:35.284253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.284282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.311 [2024-07-15 07:57:35.284295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.284308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.311 [2024-07-15 07:57:35.284319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.284332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.311 [2024-07-15 07:57:35.284358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.284377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.284389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.311 [2024-07-15 07:57:35.284414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.311 [2024-07-15 07:57:35.284450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.311 [2024-07-15 07:57:35.284634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.311 [2024-07-15 07:57:35.284656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.311 [2024-07-15 07:57:35.284668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.284680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.284701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.284716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.284727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000015700) 00:30:44.311 [2024-07-15 07:57:35.284747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.311 [2024-07-15 07:57:35.284794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.311 [2024-07-15 07:57:35.284989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.311 [2024-07-15 07:57:35.285011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.311 [2024-07-15 07:57:35.285022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.285048] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:44.311 [2024-07-15 07:57:35.285062] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:44.311 [2024-07-15 07:57:35.285088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.311 [2024-07-15 07:57:35.285134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.311 [2024-07-15 07:57:35.285167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.311 [2024-07-15 07:57:35.285335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.311 [2024-07-15 07:57:35.285355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.311 [2024-07-15 07:57:35.285366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.285404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.311 [2024-07-15 07:57:35.285448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.311 [2024-07-15 07:57:35.285478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.311 [2024-07-15 07:57:35.285643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.311 [2024-07-15 07:57:35.285663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.311 [2024-07-15 07:57:35.285675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.311 [2024-07-15 07:57:35.285720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.311 [2024-07-15 07:57:35.285735] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.285745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.285763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.285793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.285965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.285986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.285997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.286035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.286078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.286108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.286267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.286289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.286301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.286338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.286382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.286412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.286586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.286608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.286619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.286657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286682] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.286700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.286730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.286893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.286914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.286925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.286968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.286993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.287016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.287049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.287201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.287221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.287232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.287269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.287312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.287342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.287501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.287521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.287533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.287570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.287613] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.287643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.287817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.287838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.287849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.287860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.291911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.291932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.291943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.291961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.292021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.312 [2024-07-15 07:57:35.292183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.292205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.292216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.292231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.292255] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:30:44.312 00:30:44.312 07:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:44.312 [2024-07-15 07:57:35.400609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:44.312 [2024-07-15 07:57:35.400724] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184625 ] 00:30:44.312 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.312 [2024-07-15 07:57:35.466305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:44.312 [2024-07-15 07:57:35.466424] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:44.312 [2024-07-15 07:57:35.466445] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:44.312 [2024-07-15 07:57:35.466480] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:44.312 [2024-07-15 07:57:35.466507] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:44.312 [2024-07-15 07:57:35.469954] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:44.312 [2024-07-15 07:57:35.470054] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:44.312 [2024-07-15 07:57:35.477810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:44.312 [2024-07-15 07:57:35.477856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:44.312 [2024-07-15 07:57:35.477871] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:44.312 [2024-07-15 07:57:35.477892] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:44.312 [2024-07-15 07:57:35.477968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.477990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.478008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.478040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:44.312 [2024-07-15 07:57:35.478079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.312 [2024-07-15 07:57:35.484903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.484931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.484944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.484958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.312 [2024-07-15 07:57:35.484982] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:44.312 [2024-07-15 07:57:35.485028] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:44.312 [2024-07-15 07:57:35.485050] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:44.312 [2024-07-15 07:57:35.485083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.485098] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.485118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.312 [2024-07-15 07:57:35.485147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.312 [2024-07-15 07:57:35.485201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.312 [2024-07-15 07:57:35.485376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.312 [2024-07-15 07:57:35.485399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.312 [2024-07-15 07:57:35.485412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.312 [2024-07-15 07:57:35.485424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.485445] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:44.313 [2024-07-15 07:57:35.485467] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:44.313 [2024-07-15 07:57:35.485488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.485502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.485529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.485553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.313 [2024-07-15 07:57:35.485607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.485744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.313 [2024-07-15 07:57:35.485768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.485780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.485791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.485806] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:44.313 [2024-07-15 07:57:35.485829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:44.313 [2024-07-15 07:57:35.485854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.485870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.485891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.485912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.313 [2024-07-15 07:57:35.485945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.486075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
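The records above trace the Fabrics bring-up of cnode1: the TCP socket connect, the ICReq/ICResp exchange (pdu type 1), FABRIC CONNECT on qid 0 returning CNTLID 0x0001, then Property Get reads of the VS and CAP registers in the "read vs" and "read cap" states. A hedged sketch of the same flow through SPDK's public API follows; it assumes spdk_env_init() has already run and is not the harness's actual code:

/* Sketch under assumptions, not the harness's code: spdk_env_init() has
 * already run, and error handling is trimmed. spdk_nvme_connect() drives
 * the whole state machine being logged here before it returns. */
#include <stdio.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_cnode1(void)	/* hypothetical helper name */
{
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return NULL;
	}

	/* The "read vs"/"read cap" states above cached these registers,
	 * fetched over Fabrics Property Get on the admin queue. */
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	printf("NVMe %u.%u, max queue entries %u\n",
	       vs.bits.mjr, vs.bits.mnr, (unsigned)cap.bits.mqes + 1);
	return ctrlr;
}

The two values printed by this sketch correspond to the "NVMe Specification Version (VS): 1.3" and "Maximum Queue Entries: 128" (MQES + 1) fields in the identify dump further below.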
00:30:44.313 [2024-07-15 07:57:35.486095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.486106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.486132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:44.313 [2024-07-15 07:57:35.486163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.486211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.313 [2024-07-15 07:57:35.486268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.486426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.313 [2024-07-15 07:57:35.486447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.486458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.486483] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:44.313 [2024-07-15 07:57:35.486497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:44.313 [2024-07-15 07:57:35.486518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:44.313 [2024-07-15 07:57:35.486635] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:44.313 [2024-07-15 07:57:35.486649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:44.313 [2024-07-15 07:57:35.486678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.486723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.313 [2024-07-15 07:57:35.486755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.486921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.313 [2024-07-15 07:57:35.486942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.486954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.486964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.486979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:44.313 [2024-07-15 07:57:35.487012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.487063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.313 [2024-07-15 07:57:35.487096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.487239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.313 [2024-07-15 07:57:35.487259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.487270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.487298] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:44.313 [2024-07-15 07:57:35.487326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:44.313 [2024-07-15 07:57:35.487349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:44.313 [2024-07-15 07:57:35.487378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:44.313 [2024-07-15 07:57:35.487424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.487469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.313 [2024-07-15 07:57:35.487501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.487755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.313 [2024-07-15 07:57:35.487777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.313 [2024-07-15 07:57:35.487789] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487801] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:44.313 [2024-07-15 07:57:35.487815] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.313 [2024-07-15 07:57:35.487828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487848] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487862] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.313 [2024-07-15 07:57:35.487907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.487922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.487934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.487963] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:44.313 [2024-07-15 07:57:35.487979] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:44.313 [2024-07-15 07:57:35.487999] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:44.313 [2024-07-15 07:57:35.488012] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:44.313 [2024-07-15 07:57:35.488028] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:44.313 [2024-07-15 07:57:35.488043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:44.313 [2024-07-15 07:57:35.488066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:44.313 [2024-07-15 07:57:35.488091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.488138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.313 [2024-07-15 07:57:35.488181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.313 [2024-07-15 07:57:35.488317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.313 [2024-07-15 07:57:35.488347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.313 [2024-07-15 07:57:35.488360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.313 [2024-07-15 07:57:35.488393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.488442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.313 [2024-07-15 07:57:35.488466] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.488519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.313 [2024-07-15 07:57:35.488535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:44.313 [2024-07-15 07:57:35.488576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.313 [2024-07-15 07:57:35.488606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.313 [2024-07-15 07:57:35.488617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.488627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.314 [2024-07-15 07:57:35.488642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.314 [2024-07-15 07:57:35.488655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.488696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.488717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.488730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.314 [2024-07-15 07:57:35.488748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.314 [2024-07-15 07:57:35.488808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.314 [2024-07-15 07:57:35.488828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:44.314 [2024-07-15 07:57:35.488841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:44.314 [2024-07-15 07:57:35.488852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.314 [2024-07-15 07:57:35.488864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.314 [2024-07-15 07:57:35.492893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.314 [2024-07-15 07:57:35.492917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.314 [2024-07-15 07:57:35.492928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.492940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.314 [2024-07-15 07:57:35.492955] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:44.314 [2024-07-15 07:57:35.492970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.493006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.493036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.493059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.314 [2024-07-15 07:57:35.493106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.314 [2024-07-15 07:57:35.493139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.314 [2024-07-15 07:57:35.493294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.314 [2024-07-15 07:57:35.493314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.314 [2024-07-15 07:57:35.493325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.314 [2024-07-15 07:57:35.493453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.493488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.493524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.314 [2024-07-15 07:57:35.493559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.314 [2024-07-15 07:57:35.493590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.314 [2024-07-15 07:57:35.493770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.314 [2024-07-15 07:57:35.493790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.314 [2024-07-15 07:57:35.493801] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493812] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.314 [2024-07-15 07:57:35.493824] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.314 [2024-07-15 07:57:35.493835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493867] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493893] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.314 [2024-07-15 07:57:35.493935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.314 [2024-07-15 07:57:35.493946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.493956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.314 [2024-07-15 07:57:35.493998] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:44.314 [2024-07-15 07:57:35.494030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.494066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.494094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.314 [2024-07-15 07:57:35.494134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.314 [2024-07-15 07:57:35.494173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.314 [2024-07-15 07:57:35.494351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.314 [2024-07-15 07:57:35.494371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.314 [2024-07-15 07:57:35.494382] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494393] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.314 [2024-07-15 07:57:35.494404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.314 [2024-07-15 07:57:35.494415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494441] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494456] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.314 [2024-07-15 07:57:35.494494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.314 [2024-07-15 07:57:35.494506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.314 [2024-07-15 07:57:35.494555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.494585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.494612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.314 [2024-07-15 07:57:35.494646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.314 [2024-07-15 07:57:35.494679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.314 [2024-07-15 07:57:35.494867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.314 [2024-07-15 07:57:35.494896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.314 [2024-07-15 07:57:35.494909] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494928] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.314 [2024-07-15 07:57:35.494942] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.314 [2024-07-15 07:57:35.494953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.494980] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.495006] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.495024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.314 [2024-07-15 07:57:35.495045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.314 [2024-07-15 07:57:35.495057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.314 [2024-07-15 07:57:35.495068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.314 [2024-07-15 07:57:35.495094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:44.314 [2024-07-15 07:57:35.495124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:44.315 [2024-07-15 07:57:35.495152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:44.315 [2024-07-15 07:57:35.495170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:44.315 [2024-07-15 07:57:35.495184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:44.315 [2024-07-15 07:57:35.495198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:44.315 [2024-07-15 07:57:35.495231] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:44.315 [2024-07-15 07:57:35.495244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
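By this point the init sequence has identified the controller, configured AER, negotiated queues, and identified namespace 1 (identify active ns, identify ns, then namespace ID descriptors), and it is finishing the supported log pages and features steps on the way to "transport ready". Once ready, the populated namespace list is reachable through SPDK's public iterators; a minimal sketch, assuming ctrlr is the controller connected above:

/* Sketch, assuming `ctrlr` is the connected controller: iterate the
 * active namespaces that the identify steps above just populated. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)	/* illustrative helper */
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %" PRIu32 ": %" PRIu64 " sectors of %" PRIu32 " bytes\n",
		       nsid, spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}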
00:30:44.315 [2024-07-15 07:57:35.495257] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:44.315 [2024-07-15 07:57:35.495306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.495322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.495342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.495364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.495377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.495392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.495414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.315 [2024-07-15 07:57:35.495448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.315 [2024-07-15 07:57:35.495487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.315 [2024-07-15 07:57:35.495640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.495661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.495673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.495689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.315 [2024-07-15 07:57:35.495712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.495729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.495739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.495765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.315 [2024-07-15 07:57:35.495789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.495805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.495822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.495852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.315 [2024-07-15 07:57:35.496018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.496040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.496051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.315 [2024-07-15 07:57:35.496095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496112] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.496130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.496176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.315 [2024-07-15 07:57:35.496327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.496347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.496359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.315 [2024-07-15 07:57:35.496401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.496435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.496470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.315 [2024-07-15 07:57:35.496607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.496628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.496639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.315 [2024-07-15 07:57:35.496691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.496749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.496772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.496804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.496825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.496844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.500891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.500925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.500940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:44.315 [2024-07-15 07:57:35.500964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.315 [2024-07-15 07:57:35.500998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.315 [2024-07-15 07:57:35.501032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.315 [2024-07-15 07:57:35.501044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:44.315 [2024-07-15 07:57:35.501056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:44.315 [2024-07-15 07:57:35.501351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.315 [2024-07-15 07:57:35.501395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.315 [2024-07-15 07:57:35.501410] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501421] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:44.315 [2024-07-15 07:57:35.501434] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:44.315 [2024-07-15 07:57:35.501446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501485] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501501] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.315 [2024-07-15 07:57:35.501543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.315 [2024-07-15 07:57:35.501555] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501565] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:44.315 [2024-07-15 07:57:35.501577] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:44.315 [2024-07-15 07:57:35.501588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501605] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501617] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.315 [2024-07-15 07:57:35.501652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.315 [2024-07-15 07:57:35.501663] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501673] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:44.315 [2024-07-15 07:57:35.501691] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:44.315 [2024-07-15 07:57:35.501703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
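The four GET LOG PAGE admin commands above (cids 4 through 7: SMART/health, error log, firmware slot, and commands supported and effects) make up the driver's "set supported log pages" step. Any of these pages can be fetched with the same public helper; a sketch for the health page follows, assuming a connected ctrlr (illustrative, not part of this run):

/* Sketch, assuming a connected `ctrlr`: fetch one of the pages read above
 * (the SMART / health page). The helper is asynchronous, so the caller
 * polls the admin queue until the completion callback fires. */
#include <stdbool.h>
#include "spdk/nvme.h"

static struct spdk_nvme_health_information_page g_health;
static volatile bool g_health_done;

static void
health_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* handle the failed completion */
	}
	g_health_done = true;
}

static int
fetch_health_page(struct spdk_nvme_ctrlr *ctrlr)	/* illustrative helper */
{
	int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
			&g_health, sizeof(g_health), 0, health_done, NULL);

	while (rc == 0 && !g_health_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return rc;
}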
00:30:44.315 [2024-07-15 07:57:35.501719] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501731] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.315 [2024-07-15 07:57:35.501774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.315 [2024-07-15 07:57:35.501785] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501794] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:44.315 [2024-07-15 07:57:35.501805] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.315 [2024-07-15 07:57:35.501831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501851] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501863] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.501920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.501930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.501941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.315 [2024-07-15 07:57:35.501977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.315 [2024-07-15 07:57:35.501998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.315 [2024-07-15 07:57:35.502009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.315 [2024-07-15 07:57:35.502019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.316 [2024-07-15 07:57:35.502045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.316 [2024-07-15 07:57:35.502063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.316 [2024-07-15 07:57:35.502073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.316 [2024-07-15 07:57:35.502083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:44.316 [2024-07-15 07:57:35.502106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.316 [2024-07-15 07:57:35.502123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.316 [2024-07-15 07:57:35.502133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.316 [2024-07-15 07:57:35.502143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:44.316 ===================================================== 00:30:44.316 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.316 ===================================================== 00:30:44.316 Controller Capabilities/Features 00:30:44.316 ================================ 00:30:44.316 Vendor ID: 8086 00:30:44.316 Subsystem Vendor ID: 8086 00:30:44.316 Serial Number: SPDK00000000000001 00:30:44.316 Model Number: SPDK bdev Controller 
00:30:44.316 Firmware Version: 24.09 00:30:44.316 Recommended Arb Burst: 6 00:30:44.316 IEEE OUI Identifier: e4 d2 5c 00:30:44.316 Multi-path I/O 00:30:44.316 May have multiple subsystem ports: Yes 00:30:44.316 May have multiple controllers: Yes 00:30:44.316 Associated with SR-IOV VF: No 00:30:44.316 Max Data Transfer Size: 131072 00:30:44.316 Max Number of Namespaces: 32 00:30:44.316 Max Number of I/O Queues: 127 00:30:44.316 NVMe Specification Version (VS): 1.3 00:30:44.316 NVMe Specification Version (Identify): 1.3 00:30:44.316 Maximum Queue Entries: 128 00:30:44.316 Contiguous Queues Required: Yes 00:30:44.316 Arbitration Mechanisms Supported 00:30:44.316 Weighted Round Robin: Not Supported 00:30:44.316 Vendor Specific: Not Supported 00:30:44.316 Reset Timeout: 15000 ms 00:30:44.316 Doorbell Stride: 4 bytes 00:30:44.316 NVM Subsystem Reset: Not Supported 00:30:44.316 Command Sets Supported 00:30:44.316 NVM Command Set: Supported 00:30:44.316 Boot Partition: Not Supported 00:30:44.316 Memory Page Size Minimum: 4096 bytes 00:30:44.316 Memory Page Size Maximum: 4096 bytes 00:30:44.316 Persistent Memory Region: Not Supported 00:30:44.316 Optional Asynchronous Events Supported 00:30:44.316 Namespace Attribute Notices: Supported 00:30:44.316 Firmware Activation Notices: Not Supported 00:30:44.316 ANA Change Notices: Not Supported 00:30:44.316 PLE Aggregate Log Change Notices: Not Supported 00:30:44.316 LBA Status Info Alert Notices: Not Supported 00:30:44.316 EGE Aggregate Log Change Notices: Not Supported 00:30:44.316 Normal NVM Subsystem Shutdown event: Not Supported 00:30:44.316 Zone Descriptor Change Notices: Not Supported 00:30:44.316 Discovery Log Change Notices: Not Supported 00:30:44.316 Controller Attributes 00:30:44.316 128-bit Host Identifier: Supported 00:30:44.316 Non-Operational Permissive Mode: Not Supported 00:30:44.316 NVM Sets: Not Supported 00:30:44.316 Read Recovery Levels: Not Supported 00:30:44.316 Endurance Groups: Not Supported 00:30:44.316 Predictable Latency Mode: Not Supported 00:30:44.316 Traffic Based Keep Alive: Not Supported 00:30:44.316 Namespace Granularity: Not Supported 00:30:44.316 SQ Associations: Not Supported 00:30:44.316 UUID List: Not Supported 00:30:44.316 Multi-Domain Subsystem: Not Supported 00:30:44.316 Fixed Capacity Management: Not Supported 00:30:44.316 Variable Capacity Management: Not Supported 00:30:44.316 Delete Endurance Group: Not Supported 00:30:44.316 Delete NVM Set: Not Supported 00:30:44.316 Extended LBA Formats Supported: Not Supported 00:30:44.316 Flexible Data Placement Supported: Not Supported 00:30:44.316 00:30:44.316 Controller Memory Buffer Support 00:30:44.316 ================================ 00:30:44.316 Supported: No 00:30:44.316 00:30:44.316 Persistent Memory Region Support 00:30:44.316 ================================ 00:30:44.316 Supported: No 00:30:44.316 00:30:44.316 Admin Command Set Attributes 00:30:44.316 ============================ 00:30:44.316 Security Send/Receive: Not Supported 00:30:44.316 Format NVM: Not Supported 00:30:44.316 Firmware Activate/Download: Not Supported 00:30:44.316 Namespace Management: Not Supported 00:30:44.316 Device Self-Test: Not Supported 00:30:44.316 Directives: Not Supported 00:30:44.316 NVMe-MI: Not Supported 00:30:44.316 Virtualization Management: Not Supported 00:30:44.316 Doorbell Buffer Config: Not Supported 00:30:44.316 Get LBA Status Capability: Not Supported 00:30:44.316 Command & Feature Lockdown Capability: Not Supported 00:30:44.316 Abort Command Limit: 4 00:30:44.316 Async
Event Request Limit: 4 00:30:44.316 Number of Firmware Slots: N/A 00:30:44.316 Firmware Slot 1 Read-Only: N/A 00:30:44.316 Firmware Activation Without Reset: N/A 00:30:44.316 Multiple Update Detection Support: N/A 00:30:44.316 Firmware Update Granularity: No Information Provided 00:30:44.316 Per-Namespace SMART Log: No 00:30:44.316 Asymmetric Namespace Access Log Page: Not Supported 00:30:44.316 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:44.316 Command Effects Log Page: Supported 00:30:44.316 Get Log Page Extended Data: Supported 00:30:44.316 Telemetry Log Pages: Not Supported 00:30:44.316 Persistent Event Log Pages: Not Supported 00:30:44.316 Supported Log Pages Log Page: May Support 00:30:44.316 Commands Supported & Effects Log Page: Not Supported 00:30:44.316 Feature Identifiers & Effects Log Page: May Support 00:30:44.316 NVMe-MI Commands & Effects Log Page: May Support 00:30:44.316 Data Area 4 for Telemetry Log: Not Supported 00:30:44.316 Error Log Page Entries Supported: 128 00:30:44.316 Keep Alive: Supported 00:30:44.316 Keep Alive Granularity: 10000 ms 00:30:44.316 00:30:44.316 NVM Command Set Attributes 00:30:44.316 ========================== 00:30:44.316 Submission Queue Entry Size 00:30:44.316 Max: 64 00:30:44.316 Min: 64 00:30:44.316 Completion Queue Entry Size 00:30:44.316 Max: 16 00:30:44.316 Min: 16 00:30:44.316 Number of Namespaces: 32 00:30:44.316 Compare Command: Supported 00:30:44.316 Write Uncorrectable Command: Not Supported 00:30:44.316 Dataset Management Command: Supported 00:30:44.316 Write Zeroes Command: Supported 00:30:44.316 Set Features Save Field: Not Supported 00:30:44.316 Reservations: Supported 00:30:44.316 Timestamp: Not Supported 00:30:44.316 Copy: Supported 00:30:44.316 Volatile Write Cache: Present 00:30:44.316 Atomic Write Unit (Normal): 1 00:30:44.316 Atomic Write Unit (PFail): 1 00:30:44.316 Atomic Compare & Write Unit: 1 00:30:44.316 Fused Compare & Write: Supported 00:30:44.316 Scatter-Gather List 00:30:44.316 SGL Command Set: Supported 00:30:44.316 SGL Keyed: Supported 00:30:44.316 SGL Bit Bucket Descriptor: Not Supported 00:30:44.316 SGL Metadata Pointer: Not Supported 00:30:44.316 Oversized SGL: Not Supported 00:30:44.316 SGL Metadata Address: Not Supported 00:30:44.316 SGL Offset: Supported 00:30:44.316 Transport SGL Data Block: Not Supported 00:30:44.316 Replay Protected Memory Block: Not Supported 00:30:44.316 00:30:44.316 Firmware Slot Information 00:30:44.316 ========================= 00:30:44.316 Active slot: 1 00:30:44.316 Slot 1 Firmware Revision: 24.09 00:30:44.316 00:30:44.316 00:30:44.316 Commands Supported and Effects 00:30:44.316 ============================== 00:30:44.316 Admin Commands 00:30:44.316 -------------- 00:30:44.316 Get Log Page (02h): Supported 00:30:44.316 Identify (06h): Supported 00:30:44.316 Abort (08h): Supported 00:30:44.316 Set Features (09h): Supported 00:30:44.316 Get Features (0Ah): Supported 00:30:44.316 Asynchronous Event Request (0Ch): Supported 00:30:44.316 Keep Alive (18h): Supported 00:30:44.316 I/O Commands 00:30:44.316 ------------ 00:30:44.316 Flush (00h): Supported LBA-Change 00:30:44.316 Write (01h): Supported LBA-Change 00:30:44.316 Read (02h): Supported 00:30:44.316 Compare (05h): Supported 00:30:44.316 Write Zeroes (08h): Supported LBA-Change 00:30:44.316 Dataset Management (09h): Supported LBA-Change 00:30:44.316 Copy (19h): Supported LBA-Change 00:30:44.316 00:30:44.316 Error Log 00:30:44.316 ========= 00:30:44.316 00:30:44.316 Arbitration 00:30:44.316 =========== 00:30:44.316 Arbitration
Burst: 1 00:30:44.316 00:30:44.316 Power Management 00:30:44.316 ================ 00:30:44.316 Number of Power States: 1 00:30:44.316 Current Power State: Power State #0 00:30:44.316 Power State #0: 00:30:44.317 Max Power: 0.00 W 00:30:44.317 Non-Operational State: Operational 00:30:44.317 Entry Latency: Not Reported 00:30:44.317 Exit Latency: Not Reported 00:30:44.317 Relative Read Throughput: 0 00:30:44.317 Relative Read Latency: 0 00:30:44.317 Relative Write Throughput: 0 00:30:44.317 Relative Write Latency: 0 00:30:44.317 Idle Power: Not Reported 00:30:44.317 Active Power: Not Reported 00:30:44.317 Non-Operational Permissive Mode: Not Supported 00:30:44.317 00:30:44.317 Health Information 00:30:44.317 ================== 00:30:44.317 Critical Warnings: 00:30:44.317 Available Spare Space: OK 00:30:44.317 Temperature: OK 00:30:44.317 Device Reliability: OK 00:30:44.317 Read Only: No 00:30:44.317 Volatile Memory Backup: OK 00:30:44.317 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:44.317 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:44.317 Available Spare: 0% 00:30:44.317 Available Spare Threshold: 0% 00:30:44.317 Life Percentage Used:[2024-07-15 07:57:35.502371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.502391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.502411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.502468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:44.317 [2024-07-15 07:57:35.502622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.502643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.502655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.502673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.502771] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:44.317 [2024-07-15 07:57:35.502807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.502829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.317 [2024-07-15 07:57:35.502843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.502872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.317 [2024-07-15 07:57:35.502894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.502908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.317 [2024-07-15 07:57:35.502920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.502933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.317 [2024-07-15 07:57:35.502954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.502969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.502980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.502999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.503035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.503175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.503206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.503219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.503252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.503302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.503359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.503540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.503561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.503572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.503597] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:44.317 [2024-07-15 07:57:35.503619] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:44.317 [2024-07-15 07:57:35.503645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.503714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.503748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.503900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.503922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 
07:57:35.503937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.503977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.503992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.504020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.504051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.504182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.504206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.504218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.504255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.504305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.504338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.504499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.504520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.504531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.504568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.504611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.504641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.504771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.504791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.504802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504813] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.504839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.504870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.317 [2024-07-15 07:57:35.508906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.317 [2024-07-15 07:57:35.508943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.317 [2024-07-15 07:57:35.509085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.317 [2024-07-15 07:57:35.509106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.317 [2024-07-15 07:57:35.509117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.317 [2024-07-15 07:57:35.509128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.317 [2024-07-15 07:57:35.509149] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:30:44.577 0% 00:30:44.577 Data Units Read: 0 00:30:44.577 Data Units Written: 0 00:30:44.577 Host Read Commands: 0 00:30:44.577 Host Write Commands: 0 00:30:44.577 Controller Busy Time: 0 minutes 00:30:44.577 Power Cycles: 0 00:30:44.577 Power On Hours: 0 hours 00:30:44.577 Unsafe Shutdowns: 0 00:30:44.577 Unrecoverable Media Errors: 0 00:30:44.577 Lifetime Error Log Entries: 0 00:30:44.577 Warning Temperature Time: 0 minutes 00:30:44.577 Critical Temperature Time: 0 minutes 00:30:44.577 00:30:44.577 Number of Queues 00:30:44.577 ================ 00:30:44.577 Number of I/O Submission Queues: 127 00:30:44.577 Number of I/O Completion Queues: 127 00:30:44.577 00:30:44.577 Active Namespaces 00:30:44.577 ================= 00:30:44.577 Namespace ID:1 00:30:44.577 Error Recovery Timeout: Unlimited 00:30:44.577 Command Set Identifier: NVM (00h) 00:30:44.577 Deallocate: Supported 00:30:44.577 Deallocated/Unwritten Error: Not Supported 00:30:44.577 Deallocated Read Value: Unknown 00:30:44.577 Deallocate in Write Zeroes: Not Supported 00:30:44.577 Deallocated Guard Field: 0xFFFF 00:30:44.577 Flush: Supported 00:30:44.577 Reservation: Supported 00:30:44.577 Namespace Sharing Capabilities: Multiple Controllers 00:30:44.577 Size (in LBAs): 131072 (0GiB) 00:30:44.577 Capacity (in LBAs): 131072 (0GiB) 00:30:44.577 Utilization (in LBAs): 131072 (0GiB) 00:30:44.577 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:44.577 EUI64: ABCDEF0123456789 00:30:44.577 UUID: 22e084e1-822f-400f-b42b-ed1cf1eafc8b 00:30:44.577 Thin Provisioning: Not Supported 00:30:44.577 Per-NS Atomic Units: Yes 00:30:44.577 Atomic Boundary Size (Normal): 0 00:30:44.577 Atomic Boundary Size (PFail): 0 00:30:44.577 Atomic Boundary Offset: 0 00:30:44.577 Maximum Single Source Range Length: 65535 00:30:44.577 Maximum Copy Length: 65535 00:30:44.577 Maximum Source Range Count: 1 00:30:44.577 NGUID/EUI64 Never Reused: No 00:30:44.577 Namespace Write Protected: No 00:30:44.577 Number of LBA Formats: 1 00:30:44.577 Current LBA Format: LBA Format #00 00:30:44.577 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:44.577 00:30:44.577 07:57:35 
nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:44.577 rmmod nvme_tcp 00:30:44.577 rmmod nvme_fabrics 00:30:44.577 rmmod nvme_keyring 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1184355 ']' 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1184355 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1184355 ']' 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1184355 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184355 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184355' 00:30:44.577 killing process with pid 1184355 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1184355 00:30:44.577 07:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1184355 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:45.952 07:57:37 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.494 07:57:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:48.494 00:30:48.494 real 0m7.364s 00:30:48.494 user 0m10.406s 00:30:48.494 sys 0m2.044s 00:30:48.494 07:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:48.494 07:57:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:48.494 ************************************ 00:30:48.494 END TEST nvmf_identify 00:30:48.494 ************************************ 00:30:48.494 07:57:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:48.494 07:57:39 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:48.494 07:57:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:48.494 07:57:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:48.494 07:57:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.494 ************************************ 00:30:48.494 START TEST nvmf_perf 00:30:48.494 ************************************ 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:48.494 * Looking for test storage... 00:30:48.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.494 
07:57:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:48.494 
07:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.494 07:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:50.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:50.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:50.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
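The discovery pass above finds both ports of the E810 NIC (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b, ice driver) and resolves each PCI function to its renamed net device through sysfs, via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). A standalone sketch of that lookup, assuming the same two PCI addresses:

  # Print the kernel net device behind each test NIC, mirroring the harness's
  # sysfs glob; requires the functions to be bound to a netdev driver (ice here).
  for pci in 0000:0a:00.0 0000:0a:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
    done
  done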
00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:50.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:50.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:50.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:30:50.399 00:30:50.399 --- 10.0.0.2 ping statistics --- 00:30:50.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.399 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:30:50.399 00:30:50.399 --- 10.0.0.1 ping statistics --- 00:30:50.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.399 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:50.399 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1186687 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1186687 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1186687 ']' 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.400 07:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.400 [2024-07-15 07:57:41.398437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
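At this point nvmf_tgt is coming up inside the cvl_0_0_ns_spdk namespace, and the trace that follows assembles the target over JSON-RPC. A condensed sketch of that bring-up, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the trace:

  # Create the TCP transport, expose one subsystem backed by a 64 MiB malloc
  # bdev plus the local NVMe drive, and listen on 10.0.0.2:4420 (all values
  # taken from the trace below).
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420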
00:30:50.400 [2024-07-15 07:57:41.398594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.400 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.400 [2024-07-15 07:57:41.552484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.659 [2024-07-15 07:57:41.816388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.659 [2024-07-15 07:57:41.816471] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.659 [2024-07-15 07:57:41.816500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.659 [2024-07-15 07:57:41.816522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.659 [2024-07-15 07:57:41.816544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.659 [2024-07-15 07:57:41.816672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.659 [2024-07-15 07:57:41.816742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.659 [2024-07-15 07:57:41.816822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.659 [2024-07-15 07:57:41.816833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:51.225 07:57:42 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:54.530 07:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:54.530 07:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:54.530 07:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:54.530 07:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:55.095 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:55.095 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:55.095 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:55.095 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:55.095 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:55.095 [2024-07-15 07:57:46.285240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:30:55.095 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.661 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:55.661 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.919 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:55.919 07:57:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:56.177 07:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.436 [2024-07-15 07:57:47.440026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.436 07:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:56.695 07:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:56.695 07:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:56.695 07:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:56.695 07:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:58.069 Initializing NVMe Controllers 00:30:58.069 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:58.069 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:58.069 Initialization complete. Launching workers. 00:30:58.069 ======================================================== 00:30:58.069 Latency(us) 00:30:58.069 Device Information : IOPS MiB/s Average min max 00:30:58.069 PCIE (0000:88:00.0) NSID 1 from core 0: 75133.14 293.49 425.36 54.07 5299.75 00:30:58.069 ======================================================== 00:30:58.069 Total : 75133.14 293.49 425.36 54.07 5299.75 00:30:58.069 00:30:58.069 07:57:49 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:58.327 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.702 Initializing NVMe Controllers 00:30:59.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:59.702 Initialization complete. Launching workers. 
00:30:59.702 ======================================================== 00:30:59.702 Latency(us) 00:30:59.702 Device Information : IOPS MiB/s Average min max 00:30:59.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.70 0.33 12206.03 227.58 45715.00 00:30:59.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 72.74 0.28 13964.10 5495.63 49343.67 00:30:59.702 ======================================================== 00:30:59.702 Total : 157.45 0.62 13018.30 227.58 49343.67 00:30:59.702 00:30:59.702 07:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.702 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.079 Initializing NVMe Controllers 00:31:01.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:01.079 Initialization complete. Launching workers. 00:31:01.079 ======================================================== 00:31:01.079 Latency(us) 00:31:01.079 Device Information : IOPS MiB/s Average min max 00:31:01.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5235.93 20.45 6146.33 1265.66 12230.41 00:31:01.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3777.95 14.76 8515.85 5521.58 17114.04 00:31:01.079 ======================================================== 00:31:01.079 Total : 9013.87 35.21 7139.45 1265.66 17114.04 00:31:01.079 00:31:01.079 07:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:01.079 07:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:01.079 07:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.079 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.362 Initializing NVMe Controllers 00:31:04.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.362 Controller IO queue size 128, less than required. 00:31:04.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.362 Controller IO queue size 128, less than required. 00:31:04.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:04.362 Initialization complete. Launching workers. 
00:31:04.362 ======================================================== 00:31:04.362 Latency(us) 00:31:04.362 Device Information : IOPS MiB/s Average min max 00:31:04.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1145.98 286.49 116844.07 79181.91 310652.84 00:31:04.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 552.28 138.07 251353.32 128072.37 509837.14 00:31:04.362 ======================================================== 00:31:04.362 Total : 1698.26 424.57 160587.24 79181.91 509837.14 00:31:04.362 00:31:04.362 07:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:04.362 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.362 No valid NVMe controllers or AIO or URING devices found 00:31:04.362 Initializing NVMe Controllers 00:31:04.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.362 Controller IO queue size 128, less than required. 00:31:04.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.362 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:04.362 Controller IO queue size 128, less than required. 00:31:04.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.362 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:04.362 WARNING: Some requested NVMe devices were skipped 00:31:04.362 07:57:55 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:04.362 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.897 Initializing NVMe Controllers 00:31:06.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.897 Controller IO queue size 128, less than required. 00:31:06.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.897 Controller IO queue size 128, less than required. 00:31:06.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:06.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:06.897 Initialization complete. Launching workers. 
00:31:06.897 00:31:06.897 ==================== 00:31:06.897 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:06.897 TCP transport: 00:31:06.897 polls: 6691 00:31:06.897 idle_polls: 1222 00:31:06.897 sock_completions: 5469 00:31:06.897 nvme_completions: 4451 00:31:06.897 submitted_requests: 6634 00:31:06.897 queued_requests: 1 00:31:06.897 00:31:06.897 ==================== 00:31:06.897 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:06.897 TCP transport: 00:31:06.897 polls: 11016 00:31:06.897 idle_polls: 6072 00:31:06.897 sock_completions: 4944 00:31:06.897 nvme_completions: 4777 00:31:06.897 submitted_requests: 7142 00:31:06.897 queued_requests: 1 00:31:06.897 ======================================================== 00:31:06.897 Latency(us) 00:31:06.897 Device Information : IOPS MiB/s Average min max 00:31:06.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1110.03 277.51 127810.03 66165.58 435269.47 00:31:06.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1191.34 297.84 107288.27 57678.53 302273.45 00:31:06.897 ======================================================== 00:31:06.897 Total : 2301.37 575.34 117186.58 57678.53 435269.47 00:31:06.897 00:31:07.155 07:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:07.155 07:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.413 07:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:07.413 07:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:07.413 07:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6746dfe6-305b-40a5-8254-c3f14274318b 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6746dfe6-305b-40a5-8254-c3f14274318b 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=6746dfe6-305b-40a5-8254-c3f14274318b 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:10.696 07:58:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:11.264 { 00:31:11.264 "uuid": "6746dfe6-305b-40a5-8254-c3f14274318b", 00:31:11.264 "name": "lvs_0", 00:31:11.264 "base_bdev": "Nvme0n1", 00:31:11.264 "total_data_clusters": 238234, 00:31:11.264 "free_clusters": 238234, 00:31:11.264 "block_size": 512, 00:31:11.264 "cluster_size": 4194304 00:31:11.264 } 00:31:11.264 ]' 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6746dfe6-305b-40a5-8254-c3f14274318b") .free_clusters' 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6746dfe6-305b-40a5-8254-c3f14274318b") .cluster_size' 00:31:11.264 07:58:02 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:31:11.264 952936 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:11.264 07:58:02 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6746dfe6-305b-40a5-8254-c3f14274318b lbd_0 20480 00:31:11.522 07:58:02 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d4909a85-a025-4676-83aa-c15a64e95d4c 00:31:11.522 07:58:02 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore d4909a85-a025-4676-83aa-c15a64e95d4c lvs_n_0 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=79a5c526-326b-4d37-b3ec-2cf214799516 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 79a5c526-326b-4d37-b3ec-2cf214799516 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=79a5c526-326b-4d37-b3ec-2cf214799516 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:12.479 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:12.737 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:12.737 { 00:31:12.737 "uuid": "6746dfe6-305b-40a5-8254-c3f14274318b", 00:31:12.737 "name": "lvs_0", 00:31:12.737 "base_bdev": "Nvme0n1", 00:31:12.737 "total_data_clusters": 238234, 00:31:12.737 "free_clusters": 233114, 00:31:12.737 "block_size": 512, 00:31:12.737 "cluster_size": 4194304 00:31:12.737 }, 00:31:12.737 { 00:31:12.737 "uuid": "79a5c526-326b-4d37-b3ec-2cf214799516", 00:31:12.738 "name": "lvs_n_0", 00:31:12.738 "base_bdev": "d4909a85-a025-4676-83aa-c15a64e95d4c", 00:31:12.738 "total_data_clusters": 5114, 00:31:12.738 "free_clusters": 5114, 00:31:12.738 "block_size": 512, 00:31:12.738 "cluster_size": 4194304 00:31:12.738 } 00:31:12.738 ]' 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="79a5c526-326b-4d37-b3ec-2cf214799516") .free_clusters' 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="79a5c526-326b-4d37-b3ec-2cf214799516") .cluster_size' 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:12.738 20456 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:12.738 07:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 79a5c526-326b-4d37-b3ec-2cf214799516 lbd_nest_0 20456 00:31:12.995 07:58:04 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=754dc53d-02f4-439e-a80c-96459539e0df 00:31:12.995 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:13.253 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:13.253 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 754dc53d-02f4-439e-a80c-96459539e0df 00:31:13.511 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.769 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:13.769 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:13.769 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:13.769 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:13.769 07:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:13.769 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.990 Initializing NVMe Controllers 00:31:25.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.990 Initialization complete. Launching workers. 00:31:25.990 ======================================================== 00:31:25.990 Latency(us) 00:31:25.990 Device Information : IOPS MiB/s Average min max 00:31:25.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.09 0.02 20798.97 280.62 45753.93 00:31:25.990 ======================================================== 00:31:25.990 Total : 48.09 0.02 20798.97 280.62 45753.93 00:31:25.990 00:31:25.990 07:58:15 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:25.990 07:58:15 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:25.990 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.967 Initializing NVMe Controllers 00:31:35.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:35.967 Initialization complete. Launching workers. 
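The single-namespace runs above and below come from the qd_depth/io_size sweep declared at host/perf.sh lines 95-99 in the trace. Reconstructed as a loop (binary path shortened, target string exactly as in the trace):

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done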
00:31:35.967 ======================================================== 00:31:35.967 Latency(us) 00:31:35.967 Device Information : IOPS MiB/s Average min max 00:31:35.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.40 10.42 11994.88 5040.25 47897.72 00:31:35.967 ======================================================== 00:31:35.967 Total : 83.40 10.42 11994.88 5040.25 47897.72 00:31:35.967 00:31:35.967 07:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:35.967 07:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:35.967 07:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:35.967 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.943 Initializing NVMe Controllers 00:31:45.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:45.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:45.943 Initialization complete. Launching workers. 00:31:45.943 ======================================================== 00:31:45.943 Latency(us) 00:31:45.943 Device Information : IOPS MiB/s Average min max 00:31:45.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4577.51 2.24 6989.74 625.55 15128.25 00:31:45.943 ======================================================== 00:31:45.943 Total : 4577.51 2.24 6989.74 625.55 15128.25 00:31:45.943 00:31:45.943 07:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:45.943 07:58:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:45.943 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.951 Initializing NVMe Controllers 00:31:55.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:55.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:55.951 Initialization complete. Launching workers. 00:31:55.951 ======================================================== 00:31:55.951 Latency(us) 00:31:55.951 Device Information : IOPS MiB/s Average min max 00:31:55.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1897.90 237.24 16870.66 1081.97 32792.29 00:31:55.951 ======================================================== 00:31:55.951 Total : 1897.90 237.24 16870.66 1081.97 32792.29 00:31:55.951 00:31:55.951 07:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:55.951 07:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:55.951 07:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:55.951 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.973 Initializing NVMe Controllers 00:32:05.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:05.973 Controller IO queue size 128, less than required. 00:32:05.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
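The "Controller IO queue size 128, less than required" notice above fires because the requested queue depth (-q 128) cannot fit entirely in the controller's advertised IO queue; as the message itself says, the overflow is queued inside the host NVMe driver, which shows up as extra latency in the table that follows. A depth below the controller queue size avoids the warning (a hypothetical rerun, not part of this job):

  spdk_nvme_perf -q 64 -o 512 -w randrw -M 50 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'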
00:32:05.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:05.973 Initialization complete. Launching workers. 00:32:05.973 ======================================================== 00:32:05.973 Latency(us) 00:32:05.973 Device Information : IOPS MiB/s Average min max 00:32:05.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8530.30 4.17 15009.88 2041.25 32162.84 00:32:05.973 ======================================================== 00:32:05.973 Total : 8530.30 4.17 15009.88 2041.25 32162.84 00:32:05.973 00:32:05.974 07:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:05.974 07:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:05.974 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.178 Initializing NVMe Controllers 00:32:18.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.178 Controller IO queue size 128, less than required. 00:32:18.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:18.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:18.178 Initialization complete. Launching workers. 00:32:18.178 ======================================================== 00:32:18.178 Latency(us) 00:32:18.178 Device Information : IOPS MiB/s Average min max 00:32:18.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.40 147.55 108731.83 31404.98 239603.67 00:32:18.178 ======================================================== 00:32:18.178 Total : 1180.40 147.55 108731.83 31404.98 239603.67 00:32:18.178 00:32:18.178 07:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.178 07:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 754dc53d-02f4-439e-a80c-96459539e0df 00:32:18.178 07:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:18.178 07:59:08 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d4909a85-a025-4676-83aa-c15a64e95d4c 00:32:18.178 07:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.435 rmmod nvme_tcp 00:32:18.435 rmmod nvme_fabrics 00:32:18.435 rmmod nvme_keyring 00:32:18.435 07:59:09 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1186687 ']' 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1186687 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1186687 ']' 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1186687 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.435 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186687 00:32:18.436 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.436 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.436 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186687' 00:32:18.436 killing process with pid 1186687 00:32:18.436 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1186687 00:32:18.436 07:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1186687 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.966 07:59:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.498 07:59:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.498 00:32:23.498 real 1m34.989s 00:32:23.498 user 5m52.510s 00:32:23.498 sys 0m14.828s 00:32:23.498 07:59:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.498 07:59:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:23.498 ************************************ 00:32:23.498 END TEST nvmf_perf 00:32:23.498 ************************************ 00:32:23.498 07:59:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:23.498 07:59:14 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:23.498 07:59:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:23.498 07:59:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.498 07:59:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.498 ************************************ 00:32:23.498 START TEST nvmf_fio_host 00:32:23.498 ************************************ 00:32:23.498 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:23.498 * Looking for test 
storage... 00:32:23.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.499 07:59:14 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:25.397 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
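The port above (and its twin 0000:0a:00.1 found just below) matches vendor:device 0x8086:0x159b, which the scan sorted into its e810 list; the 0x1017/0x1019 comparisons that follow are the Mellanox IDs from the mlx array and do not apply here. The same lookup can be repeated by hand:

  lspci -nn -d 8086:159b   # should list the two ports the scan found, 0000:0a:00.0 and 0000:0a:00.1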
00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:25.397 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.397 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:25.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:25.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
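nvmf_tcp_init in the trace below splits the two ports across a network namespace so that initiator and target traffic actually crosses the link. Condensed from the commands that follow (interface and namespace names taken from the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                               # sanity checks, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1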
00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:25.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:32:25.398 00:32:25.398 --- 10.0.0.2 ping statistics --- 00:32:25.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.398 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:25.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:32:25.398 00:32:25.398 --- 10.0.0.1 ping statistics --- 00:32:25.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.398 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1199179 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1199179 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1199179 ']' 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:25.398 07:59:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.398 [2024-07-15 07:59:16.536983] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:25.398 [2024-07-15 07:59:16.537118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.398 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.656 [2024-07-15 07:59:16.674945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:25.914 [2024-07-15 07:59:16.930299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
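"EAL: No free 2048 kB hugepages reported on node 1" recurs before every EAL-based tool in this log, yet every run completes, so the hugepage pool in use presumably lives on node 0 or uses a different page size. If it ever coincided with an allocation failure, the pool is quick to inspect:

  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages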
00:32:25.914 [2024-07-15 07:59:16.930373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.914 [2024-07-15 07:59:16.930401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.914 [2024-07-15 07:59:16.930422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.914 [2024-07-15 07:59:16.930443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.914 [2024-07-15 07:59:16.930619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.914 [2024-07-15 07:59:16.930707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.914 [2024-07-15 07:59:16.930748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.914 [2024-07-15 07:59:16.930758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:26.480 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:26.480 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:32:26.480 07:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:26.480 [2024-07-15 07:59:17.671007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.480 07:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:26.480 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:26.480 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.739 07:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:26.998 Malloc1 00:32:26.998 07:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.257 07:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:27.515 07:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.774 [2024-07-15 07:59:18.760152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.774 07:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:28.032 07:59:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.317 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:28.317 fio-3.35 00:32:28.317 Starting 1 thread 00:32:28.317 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.845 [2024-07-15 07:59:21.796956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:32:30.845 [2024-07-15 07:59:21.797040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:32:30.845 [2024-07-15 07:59:21.797063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:32:30.845 [2024-07-15 07:59:21.797082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:32:30.845 [2024-07-15 07:59:21.797101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:32:30.845 00:32:30.845 test: (groupid=0, jobs=1): err= 0: pid=1199657: Mon Jul 15 07:59:21 2024 00:32:30.845 read: IOPS=6511, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2008msec) 00:32:30.845 slat (usec): min=2, max=190, avg= 3.48, stdev= 2.56 00:32:30.845 clat (usec): min=3564, max=18742, avg=10760.72, stdev=924.07 00:32:30.845 lat (usec): min=3597, max=18745, avg=10764.21, stdev=923.92 00:32:30.845 clat percentiles (usec): 00:32:30.845 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 
20.00th=[10028], 00:32:30.845 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:32:30.845 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:32:30.845 | 99.00th=[12780], 99.50th=[13173], 99.90th=[16581], 99.95th=[17433], 00:32:30.845 | 99.99th=[18744] 00:32:30.845 bw ( KiB/s): min=24600, max=26840, per=99.88%, avg=26016.00, stdev=987.75, samples=4 00:32:30.845 iops : min= 6150, max= 6710, avg=6504.00, stdev=246.94, samples=4 00:32:30.845 write: IOPS=6520, BW=25.5MiB/s (26.7MB/s)(51.1MiB/2008msec); 0 zone resets 00:32:30.845 slat (usec): min=2, max=152, avg= 3.69, stdev= 1.88 00:32:30.845 clat (usec): min=1909, max=17164, avg=8789.77, stdev=780.54 00:32:30.845 lat (usec): min=1926, max=17167, avg=8793.46, stdev=780.48 00:32:30.845 clat percentiles (usec): 00:32:30.845 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8225], 00:32:30.845 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:32:30.845 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:32:30.845 | 99.00th=[10421], 99.50th=[10945], 99.90th=[15270], 99.95th=[16319], 00:32:30.845 | 99.99th=[16581] 00:32:30.845 bw ( KiB/s): min=25816, max=26368, per=99.95%, avg=26070.00, stdev=265.04, samples=4 00:32:30.845 iops : min= 6454, max= 6592, avg=6517.50, stdev=66.26, samples=4 00:32:30.845 lat (msec) : 2=0.01%, 4=0.08%, 10=57.51%, 20=42.40% 00:32:30.845 cpu : usr=64.47%, sys=31.94%, ctx=79, majf=0, minf=1536 00:32:30.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:30.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:30.845 issued rwts: total=13076,13094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:30.845 00:32:30.845 Run status group 0 (all jobs): 00:32:30.845 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2008-2008msec 00:32:30.845 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2008-2008msec 00:32:30.845 ----------------------------------------------------- 00:32:30.845 Suppressions used: 00:32:30.845 count bytes template 00:32:30.845 1 57 /usr/src/fio/parse.c 00:32:30.845 1 8 libtcmalloc_minimal.so 00:32:30.845 ----------------------------------------------------- 00:32:30.845 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:30.845 
07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:30.845 07:59:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:31.103 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:31.103 fio-3.35 00:32:31.103 Starting 1 thread 00:32:31.360 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.885 00:32:33.885 test: (groupid=0, jobs=1): err= 0: pid=1199993: Mon Jul 15 07:59:24 2024 00:32:33.885 read: IOPS=6236, BW=97.4MiB/s (102MB/s)(196MiB/2007msec) 00:32:33.885 slat (usec): min=3, max=131, avg= 5.10, stdev= 2.14 00:32:33.885 clat (usec): min=3663, max=24880, avg=11880.68, stdev=2522.13 00:32:33.885 lat (usec): min=3668, max=24885, avg=11885.78, stdev=2522.17 00:32:33.885 clat percentiles (usec): 00:32:33.885 | 1.00th=[ 6456], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9765], 00:32:33.885 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12387], 00:32:33.885 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15270], 95.00th=[16319], 00:32:33.885 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20579], 99.95th=[21103], 00:32:33.885 | 99.99th=[21890] 00:32:33.885 bw ( KiB/s): min=40672, max=57344, per=49.38%, avg=49272.00, stdev=9067.45, samples=4 00:32:33.885 iops : min= 2542, max= 3584, avg=3079.50, stdev=566.72, samples=4 00:32:33.885 write: IOPS=3627, BW=56.7MiB/s (59.4MB/s)(101MiB/1789msec); 0 zone resets 00:32:33.885 slat (usec): min=33, max=165, avg=36.37, stdev= 5.68 00:32:33.885 clat (usec): min=7810, max=26667, avg=15397.55, stdev=2582.90 00:32:33.885 lat (usec): min=7845, max=26701, avg=15433.93, stdev=2582.85 00:32:33.885 clat percentiles (usec): 00:32:33.885 | 1.00th=[10159], 5.00th=[11469], 10.00th=[12125], 20.00th=[13042], 00:32:33.885 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15270], 60.00th=[16057], 00:32:33.885 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18744], 95.00th=[19792], 00:32:33.885 | 99.00th=[21365], 99.50th=[21890], 99.90th=[25822], 99.95th=[26346], 00:32:33.885 | 99.99th=[26608] 00:32:33.885 bw ( KiB/s): min=42016, max=60416, per=88.65%, avg=51456.00, stdev=9780.66, samples=4 00:32:33.885 iops : min= 2626, max= 3776, avg=3216.00, stdev=611.29, samples=4 00:32:33.885 lat (msec) : 4=0.05%, 10=15.50%, 
20=82.89%, 50=1.56% 00:32:33.885 cpu : usr=73.69%, sys=23.92%, ctx=42, majf=0, minf=2080 00:32:33.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:32:33.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:33.886 issued rwts: total=12517,6490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:33.886 00:32:33.886 Run status group 0 (all jobs): 00:32:33.886 READ: bw=97.4MiB/s (102MB/s), 97.4MiB/s-97.4MiB/s (102MB/s-102MB/s), io=196MiB (205MB), run=2007-2007msec 00:32:33.886 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=101MiB (106MB), run=1789-1789msec 00:32:33.886 ----------------------------------------------------- 00:32:33.886 Suppressions used: 00:32:33.886 count bytes template 00:32:33.886 1 57 /usr/src/fio/parse.c 00:32:33.886 186 17856 /usr/src/fio/iolog.c 00:32:33.886 1 8 libtcmalloc_minimal.so 00:32:33.886 ----------------------------------------------------- 00:32:33.886 00:32:33.886 07:59:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:32:34.144 07:59:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:37.430 Nvme0n1 00:32:37.430 07:59:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d83c505c-00f1-40a3-ac89-03b17ec0b5f7 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d83c505c-00f1-40a3-ac89-03b17ec0b5f7 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d83c505c-00f1-40a3-ac89-03b17ec0b5f7 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:40.718 { 00:32:40.718 "uuid": "d83c505c-00f1-40a3-ac89-03b17ec0b5f7", 00:32:40.718 "name": "lvs_0", 00:32:40.718 "base_bdev": "Nvme0n1", 00:32:40.718 "total_data_clusters": 930, 00:32:40.718 "free_clusters": 930, 00:32:40.718 "block_size": 512, 00:32:40.718 "cluster_size": 1073741824 00:32:40.718 } 00:32:40.718 ]' 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d83c505c-00f1-40a3-ac89-03b17ec0b5f7") .free_clusters' 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d83c505c-00f1-40a3-ac89-03b17ec0b5f7") .cluster_size' 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:40.718 952320 00:32:40.718 07:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:40.718 2afb7fe9-d90a-4a75-86aa-4009e2b0c710 00:32:40.978 07:59:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:40.978 07:59:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:41.236 07:59:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:41.495 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:41.753 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:41.753 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:41.753 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:41.753 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:41.753 07:59:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.753 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:41.753 fio-3.35 00:32:41.753 Starting 1 thread 00:32:42.011 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.536 00:32:44.536 test: (groupid=0, jobs=1): err= 0: pid=1201384: Mon Jul 15 07:59:35 2024 00:32:44.536 read: IOPS=4425, BW=17.3MiB/s (18.1MB/s)(34.8MiB/2012msec) 00:32:44.536 slat (usec): min=2, max=168, avg= 3.77, stdev= 2.82 00:32:44.536 clat (usec): min=1276, max=172744, avg=15829.54, stdev=13129.58 00:32:44.536 lat (usec): min=1280, max=172802, avg=15833.30, stdev=13129.95 00:32:44.536 clat percentiles (msec): 00:32:44.536 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:32:44.536 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:32:44.536 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:44.536 | 99.00th=[ 21], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:44.536 | 99.99th=[ 174] 00:32:44.536 bw ( KiB/s): min=12758, max=19488, per=99.81%, avg=17671.50, stdev=3278.82, samples=4 00:32:44.536 iops : min= 3189, max= 4872, avg=4417.75, stdev=819.95, samples=4 00:32:44.536 write: IOPS=4430, BW=17.3MiB/s (18.1MB/s)(34.8MiB/2012msec); 0 zone resets 00:32:44.536 slat (usec): min=3, max=121, avg= 3.98, stdev= 2.08 00:32:44.536 clat (usec): min=467, max=170304, avg=12924.66, stdev=12386.68 00:32:44.536 lat (usec): min=470, max=170314, avg=12928.64, stdev=12387.03 00:32:44.536 clat percentiles (msec): 00:32:44.536 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:44.536 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:32:44.536 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:44.536 | 99.00th=[ 18], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:44.536 | 99.99th=[ 171] 00:32:44.536 bw ( KiB/s): min=13333, max=19320, per=99.87%, avg=17699.25, stdev=2913.18, samples=4 00:32:44.536 iops : min= 3333, max= 4830, avg=4424.75, stdev=728.42, samples=4 00:32:44.536 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:44.536 lat (msec) : 2=0.02%, 4=0.11%, 10=2.04%, 20=96.86%, 50=0.24% 00:32:44.536 lat (msec) : 250=0.72% 00:32:44.536 cpu : usr=57.73%, sys=39.48%, ctx=72, majf=0, minf=1535 00:32:44.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:44.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:32:44.536 issued rwts: total=8905,8914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:44.536 00:32:44.536 Run status group 0 (all jobs): 00:32:44.536 READ: bw=17.3MiB/s (18.1MB/s), 17.3MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=34.8MiB (36.5MB), run=2012-2012msec 00:32:44.536 WRITE: bw=17.3MiB/s (18.1MB/s), 17.3MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=34.8MiB (36.5MB), run=2012-2012msec 00:32:44.536 ----------------------------------------------------- 00:32:44.536 Suppressions used: 00:32:44.536 count bytes template 00:32:44.536 1 58 /usr/src/fio/parse.c 00:32:44.536 1 8 libtcmalloc_minimal.so 00:32:44.536 ----------------------------------------------------- 00:32:44.536 00:32:44.536 07:59:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:44.795 07:59:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=5e230458-51fc-4696-8622-86c0baf21011 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 5e230458-51fc-4696-8622-86c0baf21011 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5e230458-51fc-4696-8622-86c0baf21011 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:46.171 { 00:32:46.171 "uuid": "d83c505c-00f1-40a3-ac89-03b17ec0b5f7", 00:32:46.171 "name": "lvs_0", 00:32:46.171 "base_bdev": "Nvme0n1", 00:32:46.171 "total_data_clusters": 930, 00:32:46.171 "free_clusters": 0, 00:32:46.171 "block_size": 512, 00:32:46.171 "cluster_size": 1073741824 00:32:46.171 }, 00:32:46.171 { 00:32:46.171 "uuid": "5e230458-51fc-4696-8622-86c0baf21011", 00:32:46.171 "name": "lvs_n_0", 00:32:46.171 "base_bdev": "2afb7fe9-d90a-4a75-86aa-4009e2b0c710", 00:32:46.171 "total_data_clusters": 237847, 00:32:46.171 "free_clusters": 237847, 00:32:46.171 "block_size": 512, 00:32:46.171 "cluster_size": 4194304 00:32:46.171 } 00:32:46.171 ]' 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5e230458-51fc-4696-8622-86c0baf21011") .free_clusters' 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5e230458-51fc-4696-8622-86c0baf21011") .cluster_size' 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:46.171 951388 00:32:46.171 07:59:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:47.547 b0b639cd-479b-4c02-9ccd-ea60d849d979 00:32:47.547 07:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:47.547 07:59:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:47.833 07:59:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:48.092 07:59:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:48.351 fio-3.35 00:32:48.351 Starting 1 thread 00:32:48.351 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.883 
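Aside, not part of the log: the trace above is the standard recipe for exporting a logical volume over NVMe/TCP, and the size handed to bdev_lvol_create is simply the store's free space in MiB (237847 free clusters x 4 MiB per cluster = 951388 MiB, per the bdev_lvol_get_lvstores output earlier). Condensed below, with the rpc.py path abbreviated into a variable; the fio run against this namespace follows right after.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388   # lvol sized to the store's entire free space
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420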
00:32:50.883 test: (groupid=0, jobs=1): err= 0: pid=1202238: Mon Jul 15 07:59:41 2024 00:32:50.883 read: IOPS=4350, BW=17.0MiB/s (17.8MB/s)(34.2MiB/2013msec) 00:32:50.883 slat (usec): min=3, max=210, avg= 3.79, stdev= 3.20 00:32:50.883 clat (usec): min=6162, max=27270, avg=16161.14, stdev=1477.67 00:32:50.883 lat (usec): min=6175, max=27274, avg=16164.93, stdev=1477.48 00:32:50.883 clat percentiles (usec): 00:32:50.883 | 1.00th=[12911], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:32:50.883 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:32:50.883 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:32:50.883 | 99.00th=[19530], 99.50th=[20055], 99.90th=[24511], 99.95th=[26346], 00:32:50.883 | 99.99th=[27395] 00:32:50.883 bw ( KiB/s): min=16488, max=17712, per=99.86%, avg=17378.00, stdev=594.82, samples=4 00:32:50.883 iops : min= 4122, max= 4428, avg=4344.50, stdev=148.70, samples=4 00:32:50.883 write: IOPS=4346, BW=17.0MiB/s (17.8MB/s)(34.2MiB/2013msec); 0 zone resets 00:32:50.883 slat (usec): min=3, max=177, avg= 4.02, stdev= 2.42 00:32:50.883 clat (usec): min=3110, max=26022, avg=13073.58, stdev=1303.89 00:32:50.883 lat (usec): min=3121, max=26026, avg=13077.60, stdev=1303.80 00:32:50.883 clat percentiles (usec): 00:32:50.883 | 1.00th=[10290], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:32:50.883 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:32:50.883 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14615], 95.00th=[15008], 00:32:50.883 | 99.00th=[16057], 99.50th=[16712], 99.90th=[23987], 99.95th=[24249], 00:32:50.883 | 99.99th=[26084] 00:32:50.883 bw ( KiB/s): min=17216, max=17600, per=99.98%, avg=17384.00, stdev=194.10, samples=4 00:32:50.883 iops : min= 4304, max= 4400, avg=4346.00, stdev=48.52, samples=4 00:32:50.883 lat (msec) : 4=0.02%, 10=0.37%, 20=99.32%, 50=0.30% 00:32:50.883 cpu : usr=65.56%, sys=31.66%, ctx=84, majf=0, minf=1534 00:32:50.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:50.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.883 issued rwts: total=8758,8750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.883 00:32:50.883 Run status group 0 (all jobs): 00:32:50.883 READ: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=34.2MiB (35.9MB), run=2013-2013msec 00:32:50.883 WRITE: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=34.2MiB (35.8MB), run=2013-2013msec 00:32:50.883 ----------------------------------------------------- 00:32:50.883 Suppressions used: 00:32:50.883 count bytes template 00:32:50.883 1 58 /usr/src/fio/parse.c 00:32:50.883 1 8 libtcmalloc_minimal.so 00:32:50.883 ----------------------------------------------------- 00:32:50.883 00:32:51.143 07:59:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:51.401 07:59:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:51.401 07:59:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:55.582 07:59:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l 
lvs_n_0 00:32:55.839 07:59:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:59.123 07:59:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:59.123 07:59:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.031 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.031 rmmod nvme_tcp 00:33:01.031 rmmod nvme_fabrics 00:33:01.031 rmmod nvme_keyring 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1199179 ']' 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1199179 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1199179 ']' 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1199179 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1199179 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1199179' 00:33:01.032 killing process with pid 1199179 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1199179 00:33:01.032 07:59:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1199179 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.412 07:59:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.949 07:59:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:04.949 00:33:04.949 real 0m41.369s 00:33:04.949 user 2m36.243s 00:33:04.949 sys 0m8.059s 00:33:04.949 07:59:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:04.949 07:59:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.949 ************************************ 00:33:04.949 END TEST nvmf_fio_host 00:33:04.949 ************************************ 00:33:04.949 07:59:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:04.949 07:59:55 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:04.949 07:59:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:04.949 07:59:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:04.949 07:59:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:04.949 ************************************ 00:33:04.949 START TEST nvmf_failover 00:33:04.949 ************************************ 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:04.949 * Looking for test storage... 00:33:04.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:04.949 07:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:06.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:06.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.855 
07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:06.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:06.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
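Aside, not part of the log: nvmf_tcp_init, traced around here, builds the test topology from the two E810 ports found above. One port is moved into a network namespace and carries the target address while its sibling stays in the root namespace as the initiator; the remaining steps and the ping checks continue just below. Condensed:

ip netns add cvl_0_0_ns_spdk                     # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
ping -c 1 10.0.0.2                               # sanity-check the path end to end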
00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.855 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:06.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:33:06.856 00:33:06.856 --- 10.0.0.2 ping statistics --- 00:33:06.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.856 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:33:06.856 00:33:06.856 --- 10.0.0.1 ping statistics --- 00:33:06.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.856 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1205733 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1205733 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1205733 ']' 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:06.856 07:59:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:06.856 [2024-07-15 07:59:57.849544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:06.856 [2024-07-15 07:59:57.849690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.856 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.856 [2024-07-15 07:59:57.982977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:07.116 [2024-07-15 07:59:58.207342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.116 [2024-07-15 07:59:58.207432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.116 [2024-07-15 07:59:58.207463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.116 [2024-07-15 07:59:58.207481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.116 [2024-07-15 07:59:58.207499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.116 [2024-07-15 07:59:58.207641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.116 [2024-07-15 07:59:58.207679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.116 [2024-07-15 07:59:58.207688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.712 07:59:58 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:07.973 [2024-07-15 07:59:59.054359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.973 07:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:08.231 Malloc0 00:33:08.231 07:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.489 07:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.747 07:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.004 [2024-07-15 08:00:00.181100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.004 08:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:09.261 [2024-07-15 08:00:00.445748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:09.261 08:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:09.518 [2024-07-15 08:00:00.702618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1206089 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1206089 /var/tmp/bdevperf.sock 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1206089 ']' 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:09.518 08:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:09.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
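Aside, not part of the log: with the subsystem now listening on 4420, 4421 and 4422, the initiator side that the trace below walks through is a standalone bdevperf started with -z (wait for RPC configuration) on its own socket. Attaching the same controller name once per portal gives bdev_nvme a second path to fail over to. A condensed sketch using the commands from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # primary path
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # failover path
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &                          # kick off the I/O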
00:33:09.519 08:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:09.519 08:00:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:10.891 08:00:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:10.891 08:00:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:33:10.891 08:00:01 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:11.148 NVMe0n1
00:33:11.148 08:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:11.406
00:33:11.406 08:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1206413
00:33:11.406 08:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:11.406 08:00:02 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:33:12.781 08:00:03 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:12.781 [2024-07-15 08:00:03.847787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set
00:33:12.781 [2024-07-15 08:00:03.849198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set
00:33:12.781 [2024-07-15 08:00:03.849214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 
00:33:12.781 [2024-07-15 08:00:03.849592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.781 [2024-07-15 08:00:03.849657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 [2024-07-15 08:00:03.849791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:33:12.782 08:00:03 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:16.092 08:00:06 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:16.092 00:33:16.092 08:00:07 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:16.350 [2024-07-15 08:00:07.473734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.473986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.350 [2024-07-15 08:00:07.474171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.474996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.475014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.475031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.475048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 [2024-07-15 08:00:07.475069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:33:16.351 08:00:07 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:19.630 08:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.630 [2024-07-15 08:00:10.735637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.630 08:00:10 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:20.563 08:00:11 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:20.823 [2024-07-15 08:00:11.994028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:20.823 [2024-07-15 08:00:11.994358] 
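The sequence above is the heart of the failover test, driven entirely through SPDK's JSON-RPC client: attach a second controller path on port 4422, remove the listener the initiator is currently using so its qpairs are torn down, wait for the reconnect, then restore port 4420 and retire 4422. Below is a minimal bash sketch of that sequence; the rpc.py invocations, addresses, ports, and NQN are taken verbatim from the trace, while the surrounding script scaffolding is reconstructed and assumes a running nvmf target and an RPC-enabled bdevperf instance.

#!/usr/bin/env bash
# Sketch of the listener-migration steps recorded above (host/failover.sh@45-57).
# Assumptions: an SPDK nvmf target already serves nqn.2016-06.io.spdk:cnode1 on
# 10.0.0.2, and a bdevperf instance accepts RPCs on /var/tmp/bdevperf.sock.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

sleep 3

# Give the initiator a second path to the subsystem via port 4422.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"

# Remove the listener the initiator is using; its qpairs are torn down and
# outstanding I/O completes as ABORTED - SQ DELETION, forcing failover to 4422.
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

sleep 3   # give the initiator time to reconnect on the surviving path

# Restore the original port, then retire the temporary one.
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422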
00:33:20.823 08:00:12 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1206413
00:33:27.378 0
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1206089
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1206089 ']'
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1206089
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1206089
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1206089'
00:33:27.378 killing process with pid 1206089
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1206089
00:33:27.378 08:00:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1206089
00:33:27.646 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:27.646 [2024-07-15 08:00:00.801901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:33:27.646 [2024-07-15 08:00:00.802069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206089 ]
00:33:27.646 EAL: No free 2048 kB hugepages reported on node 1
00:33:27.646 [2024-07-15 08:00:00.933031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:27.646 [2024-07-15 08:00:01.169189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:27.646 Running I/O for 15 seconds...
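The killprocess trace above (autotest_common.sh@948-@972) shows the teardown guardrails: require a pid, confirm the process is alive, check its command name so a sudo wrapper is never killed directly, then kill and reap it. A rough reconstruction of only the steps visible in the trace follows; the real helper in test/common/autotest_common.sh may handle more cases (what it does when the name is sudo is not visible here, so the sketch simply bails). The try.txt dump below resumes bdevperf's own output.

# Rough reconstruction of the killprocess steps visible in the trace above;
# the actual helper in test/common/autotest_common.sh may differ in detail.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                         # @948: a pid is required
    kill -0 "$pid" || return 1                        # @952: bail if not running
    if [ "$(uname)" = Linux ]; then                   # @953
        process_name=$(ps --no-headers -o comm= "$pid")   # @954: e.g. reactor_0
    fi
    [ "$process_name" = sudo ] && return 1            # @958: assumed bail-out; real handling not shown in trace
    echo "killing process with pid $pid"              # @966
    kill "$pid"                                       # @967
    wait "$pid"                                       # @972: reap the child
}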
00:33:27.646 [2024-07-15 08:00:03.850937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.646 [2024-07-15 08:00:03.850995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.648 [2024-07-15 08:00:03.853608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.648 [2024-07-15 08:00:03.853629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.648 [... every other outstanding command on sqid:1 (READ lba:55928 through lba:56408 and WRITE lba:56424 through lba:56720, all len:8) prints an analogous command/completion pair ending in ABORTED - SQ DELETION (00/08); repetitive pairs trimmed, timestamps 08:00:03.851050 through 08:00:03.855770 ...]
00:33:27.649 [2024-07-15 08:00:03.855792] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.855814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.855837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.855874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.855909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.855933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.855957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.855980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.649 [2024-07-15 08:00:03.856321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56816 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56824 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56832 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56840 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56848 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56856 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56864 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.856926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.856943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.856960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56872 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.856984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56880 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56888 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56896 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56904 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56912 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56920 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56928 len:8 PRP1 0x0 PRP2 0x0 00:33:27.649 [2024-07-15 08:00:03.857502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.649 [2024-07-15 08:00:03.857521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.649 [2024-07-15 08:00:03.857537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.649 [2024-07-15 08:00:03.857571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56936 len:8 PRP1 0x0 PRP2 0x0 00:33:27.650 [2024-07-15 08:00:03.857591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.650 [2024-07-15 08:00:03.857890] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 
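For reference, the "(00/08)" tuple repeated through these completions is the NVMe status pair SCT/SC: status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion; the trailing p/m/dnr fields are the completion entry's phase tag, More, and Do Not Retry bits. A minimal standalone decode, not SPDK code; names are hypothetical:

# Hypothetical decoder for the "(SCT/SC)" tuples printed by spdk_nvme_print_completion.
# Only the status confirmed by this log (0x08) is tabulated; others fall through.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",  # the (00/08) seen throughout this run
}

def decode(sct: int, sc: int) -> str:
    """Return a readable name for an (SCT/SC) pair; generic type only."""
    if sct == 0x0:  # Generic Command Status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

print(decode(0x0, 0x08))  # -> ABORTED - SQ DELETION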
00:33:27.650 [2024-07-15 08:00:03.857924] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 4 outstanding ASYNC EVENT REQUESTs (0c) (qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000) printed and completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; identical NOTICE pairs from 08:00:03.857978 to 08:00:03.858137 condensed ...]
00:33:27.650 [2024-07-15 08:00:03.858157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.650 [2024-07-15 08:00:03.858251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:27.650 [2024-07-15 08:00:03.862127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.650 [2024-07-15 08:00:03.990557] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
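Each abort above is a fixed-format pair: a print_command NOTICE carrying the opcode and sqid/cid/nsid/lba, followed by a print_completion NOTICE carrying the status. A minimal sketch for tallying such pairs from a captured log; the file name is hypothetical and the regex matches only the fields shown in these entries:

import re
from collections import Counter

# Matches the per-command NOTICE lines shown above, e.g.
#   "... *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56488 len:8 ..."
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def tally(path: str) -> Counter:
    """Count aborted commands per (opcode, sqid) from a captured autotest log."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            for m in CMD_RE.finditer(line):  # several entries can share one physical line
                counts[(m.group(1), int(m.group(2)))] += 1
    return counts

if __name__ == "__main__":
    for (op, sqid), n in tally("nvmf_failover.log").items():  # hypothetical path
        print(f"{op} sqid:{sqid}: {n} commands printed during abort")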
00:33:27.650 [... 4 outstanding ASYNC EVENT REQUESTs (0c) (qid:0 cid:3..0 nsid:0 cdw10:00000000 cdw11:00000000) printed and completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; identical NOTICE pairs from 08:00:07.470202 to 08:00:07.470444 condensed ...]
00:33:27.650 [2024-07-15 08:00:07.470464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
[... 63 in-flight READ commands (sqid:1, cid varies, nsid:1, lba:130376..130872 step 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and 49 in-flight WRITE commands (sqid:1, cid varies, nsid:1, lba:130880..131064 and lba:0..192 step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) printed and each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; identical NOTICE pairs from 08:00:07.476982 to 08:00:07.482393 condensed ...]
[... queued WRITE commands (sqid:1 cid:0 nsid:1, lba:200..216 step 8, len:8, PRP1 0x0 PRP2 0x0) aborted by nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) and completed manually with the same ABORTED - SQ DELETION (00/08) status; identical entries from 08:00:07.482438 to 08:00:07.482673 condensed ...]
00:33:27.654 [2024-07-15 08:00:07.482693] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.482709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.482727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.482746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.482765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.482782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.482799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:232 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.482818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.482838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.482855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.482872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:240 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.482900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.482921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.482946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.482963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:248 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.482982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:264 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:33:27.654 [2024-07-15 08:00:07.483163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:272 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:280 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:296 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:304 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:27.654 [2024-07-15 08:00:07.483569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:312 len:8 PRP1 0x0 PRP2 0x0 00:33:27.654 [2024-07-15 08:00:07.483587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:07.483606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:27.654 [2024-07-15 08:00:07.483623] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:27.654 [2024-07-15 08:00:07.483640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:8 PRP1 0x0 PRP2 0x0
00:33:27.654 [2024-07-15 08:00:07.483659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.654 [2024-07-15 08:00:07.483946] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller.
00:33:27.654 [2024-07-15 08:00:07.483978] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:27.654 [2024-07-15 08:00:07.484001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.654 [2024-07-15 08:00:07.484074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:27.654 [2024-07-15 08:00:07.487936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.654 [2024-07-15 08:00:07.650438] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:27.654 [2024-07-15 08:00:11.994152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:27.654 [2024-07-15 08:00:11.994213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.654 [2024-07-15 08:00:11.994242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:27.654 [2024-07-15 08:00:11.994264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.654 [2024-07-15 08:00:11.994287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:27.654 [2024-07-15 08:00:11.994309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.654 [2024-07-15 08:00:11.994331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:27.654 [2024-07-15 08:00:11.994352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.654 [2024-07-15 08:00:11.994372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
00:33:27.654 [2024-07-15 08:00:11.995437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.654 [2024-07-15 08:00:11.995473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.654 [2024-07-15 08:00:11.995513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.654 [2024-07-15 08:00:11.995538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:33:27.654 [2024-07-15 08:00:11.995563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.995964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.995986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 08:00:11.996010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.654 [2024-07-15 08:00:11.996033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.654 [2024-07-15 
08:00:11.996056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.996957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.996979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.655 [2024-07-15 08:00:11.997714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.655 [2024-07-15 08:00:11.997737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.655 [2024-07-15 08:00:11.997758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.997781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.997824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.997845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.997868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.997914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.997941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.997963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.997986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.998008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.998053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 
[2024-07-15 08:00:11.998429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998914] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.998963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.998986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999373] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.656 [2024-07-15 08:00:11.999561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.999607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.999652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.999711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.999759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.656 [2024-07-15 08:00:11.999783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.656 [2024-07-15 08:00:11.999805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:11.999829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:11.999850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:11.999874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:11.999905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:11.999930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:11.999951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:11.999974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.657 [2024-07-15 08:00:12.000691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.657 [2024-07-15 08:00:12.000736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 [2024-07-15 08:00:12.000761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.657 [2024-07-15 08:00:12.000783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.657 
[2024-07-15 08:00:12.000806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.000827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.000850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.000872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.000904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.000926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.000950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.000972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.000997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.657 [2024-07-15 08:00:12.001446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:27.657 [2024-07-15 08:00:12.001510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:27.657 [2024-07-15 08:00:12.001530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125304 len:8 PRP1 0x0 PRP2 0x0
00:33:27.657 [2024-07-15 08:00:12.001550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.657 [2024-07-15 08:00:12.001828] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller.
00:33:27.657 [2024-07-15 08:00:12.001861] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:33:27.657 [2024-07-15 08:00:12.001892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:27.657 [2024-07-15 08:00:12.005695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:27.657 [2024-07-15 08:00:12.005769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:27.657 [2024-07-15 08:00:12.091253] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
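The sequence above is the tail of one failover cycle: queued writes are aborted, bdev_nvme fails the active path over from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller is reset. A minimal sketch of how such a cycle can be driven over SPDK's rpc.py, with the socket path, bdev name and NQN taken from this log; the remove_listener trigger is an assumption about what failover.sh did here, since the RPC itself is not echoed in this excerpt:

    # rpc is an assumed shorthand for the scripts/rpc.py used throughout this log
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # register two paths under the same -b name so bdev_nvme can fail over between them
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # removing the listener carrying the active path is what produces the
    # "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notices above
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4422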
00:33:27.657 
00:33:27.657 Latency(us)
00:33:27.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:27.657 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:27.657 Verification LBA range: start 0x0 length 0x4000
00:33:27.657 NVMe0n1 : 15.01 6035.21 23.58 808.49 0.00 18669.51 1092.27 22427.88
00:33:27.657 ===================================================================================================================
00:33:27.657 Total : 6035.21 23.58 808.49 0.00 18669.51 1092.27 22427.88
00:33:27.657 Received shutdown signal, test time was about 15.000000 seconds
00:33:27.657 
00:33:27.658 Latency(us)
00:33:27.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:27.658 ===================================================================================================================
00:33:27.658 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1208760
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1208760 /var/tmp/bdevperf.sock
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1208760 ']'
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
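Here the test relaunches bdevperf in RPC-idle mode and blocks on waitforlisten (max_retries=100 per the xtrace above). A rough stand-in for that helper, assuming only that the socket answers the standard rpc_get_methods call once the app is up:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # -z defers I/O until RPCs arrive; -r sets the UNIX-domain RPC socket
    $bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!

    # poll the socket until it accepts RPCs, mirroring waitforlisten's retry bound
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done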
00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:27.658 08:00:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:28.617 08:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.617 08:00:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:28.617 08:00:19 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:28.875 [2024-07-15 08:00:19.986534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:28.875 08:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:29.131 [2024-07-15 08:00:20.251503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:29.131 08:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:29.388 NVMe0n1 00:33:29.647 08:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:29.904 00:33:29.904 08:00:20 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:30.160 00:33:30.160 08:00:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:30.160 08:00:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:30.417 08:00:21 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:30.675 08:00:21 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:33.956 08:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:33.956 08:00:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:33.956 08:00:25 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1209551 00:33:33.956 08:00:25 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:33.956 08:00:25 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1209551 00:33:35.327 0 00:33:35.327 08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:35.327 [2024-07-15 08:00:18.842176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:35.327 [2024-07-15 08:00:18.842328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208760 ]
00:33:35.327 EAL: No free 2048 kB hugepages reported on node 1
00:33:35.327 [2024-07-15 08:00:18.973524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:35.327 [2024-07-15 08:00:19.205872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:35.327 [2024-07-15 08:00:21.732263] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:35.327 [2024-07-15 08:00:21.732412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:35.327 [2024-07-15 08:00:21.732446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:35.327 [2024-07-15 08:00:21.732474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:35.327 [2024-07-15 08:00:21.732496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:35.327 [2024-07-15 08:00:21.732517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:35.327 [2024-07-15 08:00:21.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:35.327 [2024-07-15 08:00:21.732558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:35.327 [2024-07-15 08:00:21.732579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:35.327 [2024-07-15 08:00:21.732599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:35.327 [2024-07-15 08:00:21.732696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:35.327 [2024-07-15 08:00:21.732749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:35.327 [2024-07-15 08:00:21.743155] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:35.327 Running I/O for 1 seconds...
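This try.txt excerpt records a run that was started remotely: with -z, bdevperf sits idle until a perform_tests RPC arrives, which is exactly the pattern the xtrace logged at failover.sh@89-92. A condensed sketch with paths from this log:

    # kick off the timed I/O phase over the RPC socket and collect its status
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait $run_test_pid    # a non-zero status here would fail the test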
00:33:35.327 
00:33:35.327 Latency(us)
00:33:35.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:35.327 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:35.327 Verification LBA range: start 0x0 length 0x4000
00:33:35.327 NVMe0n1 : 1.01 6147.03 24.01 0.00 0.00 20737.82 3034.07 17379.18
00:33:35.327 ===================================================================================================================
00:33:35.327 Total : 6147.03 24.01 0.00 0.00 20737.82 3034.07 17379.18
00:33:35.327 08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:33:35.327 08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:35.590 08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:33:35.847 08:00:26 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:36.106 08:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:33:39.387 08:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
08:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:33:39.387 08:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1208760
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1208760 ']'
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1208760
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1208760
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1208760'
killing process with pid 1208760
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1208760
08:00:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1208760
00:33:40.324 08:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:33:40.324 08:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:33:40.581 
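The detach calls just above peel the redundant paths off one at a time, checking after each that the NVMe0 controller still answers on the remaining path. Condensed, with all arguments exactly as logged:

    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # the controller must still be present via the surviving path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0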
08:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.581 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.581 rmmod nvme_tcp 00:33:40.581 rmmod nvme_fabrics 00:33:40.582 rmmod nvme_keyring 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1205733 ']' 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1205733 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1205733 ']' 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1205733 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.582 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1205733 00:33:40.840 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:40.840 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:40.840 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1205733' 00:33:40.840 killing process with pid 1205733 00:33:40.840 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1205733 00:33:40.840 08:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1205733 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.216 08:00:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.118 08:00:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:44.118 00:33:44.118 real 0m39.596s 00:33:44.118 user 2m18.871s 00:33:44.118 sys 0m6.175s 00:33:44.118 08:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.118 08:00:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
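The nvmftestfini sequence above unloads the kernel initiator stack with retries (the rmmod lines show connections still draining), kills the target process, and tears the namespace down. An approximate reconstruction; the retry bound of 20 matches the {1..20} loop in the xtrace, and the netns cleanup is an assumption about what _remove_spdk_ns does:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess, simplified
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # _remove_spdk_ns, assumed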
00:33:44.118 ************************************ 00:33:44.118 END TEST nvmf_failover 00:33:44.118 ************************************ 00:33:44.118 08:00:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:44.118 08:00:35 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:44.118 08:00:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:44.118 08:00:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.118 08:00:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.118 ************************************ 00:33:44.118 START TEST nvmf_host_discovery 00:33:44.118 ************************************ 00:33:44.118 08:00:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:44.376 * Looking for test storage... 00:33:44.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:44.376 08:00:35 
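Just before these PATH exports, common.sh derived the host identity from nvme-cli. The pattern, with the host-ID extraction being an assumption about how common.sh splits the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep the trailing UUID as the host ID (assumed)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")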
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.376 08:00:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:44.377 08:00:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.377 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:44.377 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:44.377 08:00:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:44.377 08:00:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.301 08:00:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:46.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:46.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:46.301 08:00:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:46.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:46.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:46.301 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.302 08:00:37 
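The scan above matches E810 ports by vendor:device id (0x8086:0x159b) from a prebuilt pci_bus_cache and then collects their netdevs from sysfs. An equivalent standalone sketch; the lspci invocation is my assumption, not what the autotest runs:

    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $net ]] && echo "Found net devices under $pci: $(basename $net)"
        done
    done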
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:33:46.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:46.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms
00:33:46.302 
00:33:46.302 --- 10.0.0.2 ping statistics ---
00:33:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:46.302 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:46.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:46.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:33:46.302 
00:33:46.302 --- 10.0.0.1 ping statistics ---
00:33:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:46.302 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1212407
00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1212407 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1212407 ']' 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.302 08:00:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.302 [2024-07-15 08:00:37.429972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:46.302 [2024-07-15 08:00:37.430115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.302 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.561 [2024-07-15 08:00:37.564765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.821 [2024-07-15 08:00:37.821343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.821 [2024-07-15 08:00:37.821429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.821 [2024-07-15 08:00:37.821457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.821 [2024-07-15 08:00:37.821483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.821 [2024-07-15 08:00:37.821504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
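To summarize the bring-up logged around here: the target NIC moves into a private namespace so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) traverse a real wire, and nvmf_tgt then starts inside that namespace pinned to core 1. Condensed from the commands as logged; only the backgrounding is my addition:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # the target app itself runs under the namespace (NVMF_TARGET_NS_CMD above);
    # -m 0x2 pins it to core 1, -e 0xFFFF enables all tracepoint groups
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!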
00:33:46.821 [2024-07-15 08:00:37.821556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.386 [2024-07-15 08:00:38.405127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.386 [2024-07-15 08:00:38.413376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.386 null0 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.386 null1 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.386 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1212556 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1212556 /tmp/host.sock 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1212556 ']' 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:47.387 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.387 08:00:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.387 [2024-07-15 08:00:38.534811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:47.387 [2024-07-15 08:00:38.534992] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212556 ] 00:33:47.646 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.646 [2024-07-15 08:00:38.678022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.905 [2024-07-15 08:00:38.910300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.471 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
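The repeated rpc_cmd/jq/sort/xargs pipelines above and below are the two list helpers from discovery.sh, reducing an RPC JSON array to a stable, space-separated name list; they are usually wrapped in the retry loop whose internals (@912-@918) appear further down. Reconstructed from the visible xtrace:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    waitforcondition() {    # retry an arbitrary shell condition: 10 tries, 1s apart
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # e.g.: waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'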
00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.472 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 [2024-07-15 08:00:39.749129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:48.731 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:33:48.732 08:00:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:49.301 [2024-07-15 08:00:40.487154] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:49.301 [2024-07-15 08:00:40.487199] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:49.301 [2024-07-15 08:00:40.487259] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:49.560 [2024-07-15 08:00:40.573543] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:49.560 [2024-07-15 08:00:40.678603] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:33:49.560 [2024-07-15 08:00:40.678650] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.818 08:00:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.818 08:00:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:49.818 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.078 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.338 [2024-07-15 08:00:41.370059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:50.338 [2024-07-15 08:00:41.370832] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:50.338 [2024-07-15 08:00:41.370913] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.338 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.339 [2024-07-15 08:00:41.498739] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:50.339 08:00:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:50.597 [2024-07-15 08:00:41.765342] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:50.597 [2024-07-15 08:00:41.765410] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:50.597 [2024-07-15 08:00:41.765427] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.535 [2024-07-15 08:00:42.607260] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:51.535 [2024-07-15 08:00:42.607326] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:51.535 [2024-07-15 08:00:42.610216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.535 [2024-07-15 08:00:42.610268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.535 [2024-07-15 08:00:42.610298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.535 [2024-07-15 08:00:42.610321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.535 [2024-07-15 08:00:42.610344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.535 [2024-07-15 08:00:42.610366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.535 [2024-07-15 08:00:42.610399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.535 [2024-07-15 08:00:42.610422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.535 [2024-07-15 08:00:42.610445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.535 08:00:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:51.535 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:51.536 [2024-07-15 08:00:42.620205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.536 [2024-07-15 08:00:42.630252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.630572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.630615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.630643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.630700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.630740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.630769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.630792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.630835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
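The autotest_common.sh@912-@918 fragments that recur throughout this trace outline the test's polling primitive: stash the condition string, retry it up to ten times, and sleep a second between attempts. A minimal sketch reconstructed from those fragments (the real helper in SPDK's test/common/autotest_common.sh may handle the give-up path differently):

    waitforcondition() {
        local cond=$1   # shell expression to poll, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]' (@912)
        local max=10    # at most ten one-second polls (@913)
        while ((max--)); do         # (@914)
            if eval "$cond"; then   # (@915)
                return 0            # condition met (@916)
            fi
            sleep 1                 # (@918)
        done
        return 1   # assumption: this run never exhausts the retries, so the failure path is not visible in the trace
    }

That loop is why the same @914/@915 pairs repeat roughly once per second above until the RPC output matches the expected value.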
00:33:51.536 [2024-07-15 08:00:42.640387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.640618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.640658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.640684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.640719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.640753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.640783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.640805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.640837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.536 [2024-07-15 08:00:42.650499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.650743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.650784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.650810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.650845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.650887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.650928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.650948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.650976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
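Between these failing reconnect attempts, the host-side checks keep polling. The host/discovery.sh@59 and @55 fragments in the trace spell out the two read helpers: query the host application over its private RPC socket, extract the names with jq, and flatten them into one sorted, space-separated string that can be compared against literals such as "nvme0n1 nvme0n2". A sketch assembled from those fragments, assuming rpc_cmd wraps SPDK's scripts/rpc.py and hard-coding the /tmp/host.sock path used in this run:

    get_subsystem_names() {   # host/discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {         # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

xargs with no arguments simply echoes its input back as a single line, which is what collapses jq's one-name-per-line output into the single string the [[ ... == ... ]] comparisons expect.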
00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:51.536 [2024-07-15 08:00:42.661800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.662086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.662124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.662159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.662191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.662222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.662262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.662282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.662321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
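The path checks work the same way, one level deeper in the controller JSON. The host/discovery.sh@63 fragments show bdev_nvme_get_controllers being queried for one named controller, with the service ID (TCP port) of every connected path extracted and sorted numerically; that is how the test watches "4420 4421" shrink to "4421" once the first listener is torn down. Reconstructed from the trace (the parameter name is a guess, since only the call site get_subsystem_paths nvme0 is visible):

    get_subsystem_paths() {   # host/discovery.sh@63
        local name=$1   # hypothetical name for the controller argument
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }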
00:33:51.536 [2024-07-15 08:00:42.671934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.672113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.672149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.672190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.672226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.672260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.672283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.672305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.672336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.536 [2024-07-15 08:00:42.682028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.682350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.682389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.682415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.682450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.682484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.682507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.682527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.682558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
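The notification bookkeeping that brackets each configuration change comes from two more helpers. host/discovery.sh@74-@75 reads the bdev event stream starting at the last consumed ID, and the notify_id values logged in this run (0, then 1, then 2, then 4) advance by exactly the number of events returned, which suggests the arithmetic below; host/discovery.sh@79-@80 then wraps the count comparison in waitforcondition. A sketch under those assumptions:

    get_notification_count() {    # host/discovery.sh@74-@75
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumption: inferred from the logged notify_id values
    }

    is_notification_count_eq() {  # host/discovery.sh@79-@80
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

Every bdev add or remove raises one notification, which is why the final check after bdev_nvme_stop_discovery further down expects exactly two events: nvme0n1 and nvme0n2 disappearing together.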
00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.536 [2024-07-15 08:00:42.692135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:51.536 [2024-07-15 08:00:42.692436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.536 [2024-07-15 08:00:42.692475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:51.536 [2024-07-15 08:00:42.692501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:51.536 [2024-07-15 08:00:42.692535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:51.536 [2024-07-15 08:00:42.692588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.536 [2024-07-15 08:00:42.692619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.536 [2024-07-15 08:00:42.692640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.536 [2024-07-15 08:00:42.692671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.536 [2024-07-15 08:00:42.693954] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:51.536 [2024-07-15 08:00:42.694001] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:51.536 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.537 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:51.795 
08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:51.795 08:00:42 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.795 08:00:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.174 [2024-07-15 08:00:43.987956] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:53.174 [2024-07-15 08:00:43.988006] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:53.174 [2024-07-15 08:00:43.988048] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:53.174 [2024-07-15 08:00:44.074329] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:53.174 [2024-07-15 08:00:44.386596] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:53.174 [2024-07-15 08:00:44.386682] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.174 
08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.174 request: 00:33:53.174 { 00:33:53.174 "name": "nvme", 00:33:53.174 "trtype": "tcp", 00:33:53.174 "traddr": "10.0.0.2", 00:33:53.174 "adrfam": "ipv4", 00:33:53.174 "trsvcid": "8009", 00:33:53.174 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:53.174 "wait_for_attach": true, 00:33:53.174 "method": "bdev_nvme_start_discovery", 00:33:53.174 "req_id": 1 00:33:53.174 } 00:33:53.174 Got JSON-RPC error response 00:33:53.174 response: 00:33:53.174 { 00:33:53.174 "code": -17, 00:33:53.174 "message": "File exists" 00:33:53.174 } 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:53.174 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.432 08:00:44 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.432 request: 00:33:53.432 { 00:33:53.432 "name": "nvme_second", 00:33:53.432 "trtype": "tcp", 00:33:53.432 "traddr": "10.0.0.2", 00:33:53.432 "adrfam": "ipv4", 00:33:53.432 "trsvcid": "8009", 00:33:53.432 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:53.432 "wait_for_attach": true, 00:33:53.432 "method": "bdev_nvme_start_discovery", 00:33:53.432 "req_id": 1 00:33:53.432 } 00:33:53.432 Got JSON-RPC error response 00:33:53.432 response: 00:33:53.432 { 00:33:53.432 "code": -17, 00:33:53.432 "message": "File exists" 00:33:53.432 } 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.432 08:00:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.805 [2024-07-15 08:00:45.606709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.805 [2024-07-15 08:00:45.606771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:33:54.805 [2024-07-15 08:00:45.606848] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:54.805 [2024-07-15 08:00:45.606874] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:54.805 [2024-07-15 08:00:45.606906] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:55.741 [2024-07-15 08:00:46.609239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.741 [2024-07-15 08:00:46.609288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:33:55.741 [2024-07-15 08:00:46.609356] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:55.741 [2024-07-15 08:00:46.609378] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:55.741 [2024-07-15 08:00:46.609397] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:56.677 [2024-07-15 08:00:47.611273] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:56.677 request: 00:33:56.677 { 00:33:56.677 "name": "nvme_second", 00:33:56.677 "trtype": "tcp", 00:33:56.677 "traddr": "10.0.0.2", 00:33:56.677 "adrfam": "ipv4", 00:33:56.677 "trsvcid": "8010", 00:33:56.677 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:56.677 "wait_for_attach": false, 00:33:56.677 "attach_timeout_ms": 3000, 00:33:56.677 "method": "bdev_nvme_start_discovery", 00:33:56.677 "req_id": 1 00:33:56.677 } 00:33:56.677 Got JSON-RPC error 
response 00:33:56.677 response: 00:33:56.677 { 00:33:56.677 "code": -110, 00:33:56.677 "message": "Connection timed out" 00:33:56.677 } 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1212556 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.677 rmmod nvme_tcp 00:33:56.677 rmmod nvme_fabrics 00:33:56.677 rmmod nvme_keyring 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1212407 ']' 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1212407 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1212407 ']' 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1212407 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1212407 00:33:56.677 08:00:47 
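The last negative case points nvme_second at port 8010, where no discovery subsystem is listening: each connect() fails with errno 111 (connection refused), the poller retries within its 3000 ms attach budget, and the RPC then fails with -110 ("Connection timed out") rather than blocking forever. The corresponding invocation, same assumptions as above:

    $ rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
    # => {"code": -110, "message": "Connection timed out"}

The request JSON above confirms the flag mapping: -w sets "wait_for_attach": true, while -T sets "attach_timeout_ms" and leaves wait_for_attach false, so the call returns once the 3 s budget is spent.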
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1212407' 00:33:56.677 killing process with pid 1212407 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1212407 00:33:56.677 08:00:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1212407 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:58.051 08:00:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.957 08:00:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:59.957 00:33:59.957 real 0m15.801s 00:33:59.957 user 0m23.800s 00:33:59.957 sys 0m3.041s 00:33:59.957 08:00:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:59.957 08:00:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.957 ************************************ 00:33:59.957 END TEST nvmf_host_discovery 00:33:59.957 ************************************ 00:33:59.957 08:00:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:59.957 08:00:51 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:59.957 08:00:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:59.957 08:00:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.957 08:00:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.957 ************************************ 00:33:59.957 START TEST nvmf_host_multipath_status 00:33:59.957 ************************************ 00:33:59.957 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:00.215 * Looking for test storage... 
00:34:00.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.215 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:00.216 08:00:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:00.216 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.157 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:02.157 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:02.157 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:02.157 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:02.157 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:02.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:02.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
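gather_supported_nvmf_pci_devs matches each PCI function against the vendor:device table built above (e810: 0x1592/0x159b, x722: 0x37d2, plus the Mellanox mlx5 ids); on this host the Intel 0x8086:0x159b pair at 0000:0a:00.0 and 0000:0a:00.1, bound to the ice driver, is what gets selected. A quick manual cross-check, assuming lspci is available (output omitted since it is host-specific):

    $ lspci -nn -d 8086:159b   # should list the same two E810 functions, 0a:00.0 and 0a:00.1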
00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:02.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:02.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:02.158 08:00:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:02.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:02.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:34:02.158 00:34:02.158 --- 10.0.0.2 ping statistics --- 00:34:02.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.158 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
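nvmf_tcp_init splits the two E810 ports between network namespaces so host and target traffic must traverse the physical link: cvl_0_0 (10.0.0.2, the target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace. The setup replayed from the trace, plus the connectivity probe that follows:

    $ ip netns add cvl_0_0_ns_spdk
    $ ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    $ ip addr add 10.0.0.1/24 dev cvl_0_1
    $ ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    $ ip link set cvl_0_1 up
    $ ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    $ ip netns exec cvl_0_0_ns_spdk ip link set lo up
    $ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $ ping -c 1 10.0.0.2    # 0.151 ms above: initiator can reach the namespaced target port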
00:34:02.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:34:02.158 00:34:02.158 --- 10.0.0.1 ping statistics --- 00:34:02.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.158 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1215840 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1215840 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1215840 ']' 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:02.158 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.158 [2024-07-15 08:00:53.375293] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
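With connectivity verified in both directions, nvmfappstart launches the target inside the namespace (core mask 0x3, all tracepoint groups enabled) and waits for its RPC socket before continuing. A sketch of the equivalent steps, with the build path shortened and the waitforlisten loop paraphrased (its real implementation is not shown in this trace; it polls until the socket answers):

    $ ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    $ nvmfpid=$!              # 1215840 in this run
    $ until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1           # waitforlisten: block until the target accepts RPCs
      done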
00:34:02.158 [2024-07-15 08:00:53.375436] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.416 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.416 [2024-07-15 08:00:53.511829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:02.675 [2024-07-15 08:00:53.769245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.675 [2024-07-15 08:00:53.769330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.675 [2024-07-15 08:00:53.769365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.675 [2024-07-15 08:00:53.769386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.675 [2024-07-15 08:00:53.769408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.675 [2024-07-15 08:00:53.769526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.675 [2024-07-15 08:00:53.769536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1215840 00:34:03.240 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:03.497 [2024-07-15 08:00:54.566270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.497 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:03.753 Malloc0 00:34:03.753 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:04.320 08:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:04.320 08:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.577 [2024-07-15 08:00:55.771764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.577 08:00:55 
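multipath_status.sh then provisions the target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and one subsystem with ANA reporting enabled (-r) so the per-listener ANA states can be flipped later, capped at 2 namespaces (-m 2) and open to any host NQN (-a). Replayed from the trace, rpc.py again standing in for the full scripts/rpc.py path talking to the default /var/tmp/spdk.sock:

    $ rpc.py nvmf_create_transport -t tcp -o -u 8192
    $ rpc.py bdev_malloc_create 64 512 -b Malloc0
    $ rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $ rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $ rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

A second listener on port 4421 follows immediately below, giving the host two TCP paths to the same namespace.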
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:04.834 [2024-07-15 08:00:56.060497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1216144 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1216144 /var/tmp/bdevperf.sock 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1216144 ']' 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:05.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:05.093 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:06.026 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:06.026 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:06.026 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:06.284 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:06.851 Nvme0n1 00:34:06.851 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:07.109 Nvme0n1 00:34:07.109 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:07.109 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:09.638 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:09.638 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
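The host side is a bdevperf instance started with -z (wait for RPC configuration) on its own socket; attaching the same controller name over both listeners, the second time with -x multipath, makes 4421 an additional I/O path under the single Nvme0n1 bdev rather than a second bdev. The -l -1 -o 10 pair is copied verbatim from the trace (they appear to be the controller-loss timeout and reconnect delay, which keep paths retrying across the ANA flips that follow). A sketch with shortened paths:

    $ ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
          -q 128 -o 4096 -w verify -t 90 &
    $ rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $ rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
          -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $ rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
          -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $ ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &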
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:09.638 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:09.894 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:10.830 08:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:10.830 08:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:10.830 08:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.830 08:01:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:11.087 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.087 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:11.087 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.087 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:11.344 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:11.344 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:11.344 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.344 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:11.602 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.602 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:11.602 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.602 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:11.860 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.860 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:11.860 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.860 08:01:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:12.116 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.116 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:12.116 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.116 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:12.373 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.373 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:12.373 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:12.629 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:12.887 08:01:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:13.819 08:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:13.819 08:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:13.819 08:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.819 08:01:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:14.076 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:14.076 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:14.076 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.076 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:14.333 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.333 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:14.333 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.333 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:14.591 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- 
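Every check_status above is six port_status probes: current, connected, and accessible for the 4420 listener, then the same three for 4421, each read back from the host's view of its I/O paths. A reconstruction of the helper from the @64 lines, with the socket and jq filter copied from the trace:

    port_status() {   # usage: port_status <trsvcid> <field> <expected>
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }
    $ port_status 4420 current true       # 4420 is the path I/O is being steered to
    $ port_status 4421 accessible true    # 4421 is usable but not currently selected

The checks show 4420 holding current while both listeners are optimized (it was attached first); flipping 4420 to non_optimized moves current over to 4421, as the next block verifies.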
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.591 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:14.591 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.591 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:14.848 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.848 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:14.848 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.848 08:01:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:15.105 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.105 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:15.105 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.105 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:15.369 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.369 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:15.369 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:15.657 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:15.915 08:01:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:16.847 08:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:16.847 08:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:16.847 08:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.847 08:01:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:17.105 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.105 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:17.105 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.105 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:17.363 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:17.363 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:17.363 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.363 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:17.620 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.620 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:17.620 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.620 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:17.878 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.878 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:17.878 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.878 08:01:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:18.138 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.138 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:18.138 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.138 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:18.394 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.394 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:18.394 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:18.652 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:18.909 08:01:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:19.840 08:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:19.840 08:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:19.840 08:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.840 08:01:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:20.097 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.097 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:20.097 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.097 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:20.354 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:20.354 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:20.354 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.354 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:20.612 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.612 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:20.612 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.612 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:20.869 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.870 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:20.870 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.870 08:01:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:21.128 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
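The state walk is driven by set_ANA_state, which reassigns each listener's ANA state on the target and then sleeps 1 s so the host can observe the change before the paths are re-checked. Reconstructed from the @59/@60 lines:

    set_ANA_state() {   # $1 = state for the 4420 listener, $2 = for 4421
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    $ set_ANA_state inaccessible inaccessible   # the combination exercised just below

Note what the expectations encode: connected stays true in every combination (the TCP connections and controllers survive), while current and accessible track the ANA states, so with both listeners inaccessible the host reports no usable path (check_status false false true true false false) without dropping either controller.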
-- # [[ true == \t\r\u\e ]] 00:34:21.128 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:21.128 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.128 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:21.385 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.385 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:21.385 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:21.643 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:21.901 08:01:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:22.831 08:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:22.831 08:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:22.831 08:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.831 08:01:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:23.089 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.089 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:23.089 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.089 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:23.347 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.347 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:23.347 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.347 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:23.604 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.604 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:34:23.604 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.604 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:23.862 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.862 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:23.862 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.862 08:01:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:24.120 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.120 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:24.120 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.120 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.379 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.379 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:24.379 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:24.636 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:24.894 08:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:25.825 08:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:25.825 08:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:25.825 08:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.825 08:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.083 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.083 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:26.083 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.083 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:26.342 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.342 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:26.342 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.342 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:26.600 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.600 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:26.600 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.600 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:26.857 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.857 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:26.857 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.857 08:01:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.115 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.115 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:27.115 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.115 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:27.371 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.371 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:27.627 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:27.627 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:34:27.884 08:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:28.140 08:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:29.131 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:29.131 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:29.131 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.131 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:29.388 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.388 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:29.388 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.388 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:29.645 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.645 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:29.645 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.645 08:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:29.903 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.903 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:29.903 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.903 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:30.160 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.160 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:30.160 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.160 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:30.417 08:01:21 
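The check_status blocks above and below all reduce to the same two helpers from host/multipath_status.sh. A minimal sketch reconstructed from the @64 and @68-@73 call sites in this trace (the bodies are inferred from the traced commands, not copied from the SPDK tree; $rootdir stands for the spdk checkout path shown in the trace):

    # Sketch: query bdevperf over its RPC socket and compare one io_path flag
    # (.current / .connected / .accessible) for the listener on a given port.
    port_status() {
        local port=$1 flag=$2 expected=$3
        local status
        status=$("$rootdir"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag")
        [[ $status == "$expected" ]]
    }

    # Sketch: check_status takes six booleans, one per (port, flag) pair,
    # in the order the @68-@73 lines execute them.
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }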
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.417 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:30.417 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.417 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:30.675 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.675 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:30.675 08:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:30.932 08:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:31.190 08:01:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:32.122 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:32.122 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:32.122 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.122 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:32.379 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.379 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:32.379 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.379 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:32.636 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.636 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:32.636 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.636 08:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:32.894 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.894 08:01:24 
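Each set_ANA_state step in this trace is the same pair of listener updates, one per portal. A sketch reconstructed from the @59/@60 lines (again inferred from the trace, with $rootdir standing in for the checkout path):

    # Sketch: set the ANA state of the 4420 and 4421 listeners independently.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        "$rootdir"/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        "$rootdir"/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    }

For example, `set_ANA_state non_optimized optimized` matches the @123 step above, after which only the optimized 4421 path reports current==true in the @125 check.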
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:32.894 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.894 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:33.152 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.152 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:33.152 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.152 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:33.409 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.409 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:33.409 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.409 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:33.666 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.666 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:33.666 08:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:33.923 08:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:34.179 08:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:35.111 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:35.111 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:35.111 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.111 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:35.369 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.369 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:35.369 08:01:26 
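Worth noting the policy switch at @116 above: once the multipath policy is set to active_active, "current" stops being exclusive to a single selected path, and every path in the best available ANA group reports current==true. That is why the optimized/optimized step (@121) and the non_optimized/non_optimized step (@131) both expect check_status true true true true true true. The call itself, as issued through the bdevperf RPC socket in this trace:

    # Spread I/O across all paths in the best ANA group instead of one active path.
    "$rootdir"/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active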
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.369 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:35.626 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.626 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:35.626 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.626 08:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:35.884 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.884 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:35.884 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.884 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.142 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.142 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:36.142 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.142 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:36.400 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.400 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:36.400 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.400 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:36.658 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.658 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:36.658 08:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:36.916 08:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:37.173 08:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.557 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:38.815 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.815 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:38.815 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.815 08:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.073 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.073 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.073 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.073 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:39.331 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.331 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:39.331 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.331 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:39.589 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.589 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:39.589 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.589 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1216144 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1216144 ']' 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1216144 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1216144 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1216144' 00:34:39.847 killing process with pid 1216144 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1216144 00:34:39.847 08:01:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1216144 00:34:40.414 Connection closed with partial response: 00:34:40.414 00:34:40.414 00:34:40.687 08:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1216144 00:34:40.687 08:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:40.687 [2024-07-15 08:00:56.159295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:40.687 [2024-07-15 08:00:56.159444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216144 ] 00:34:40.687 EAL: No free 2048 kB hugepages reported on node 1 00:34:40.687 [2024-07-15 08:00:56.283936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.687 [2024-07-15 08:00:56.513826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.687 Running I/O for 90 seconds... 
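The @137 killprocess teardown above follows the usual autotest_common.sh pattern; here is a simplified sketch matching the @948-@972 lines in this trace (reconstructed from the trace, not quoted from the source):

    # Sketch: stop a test process by pid, as traced at @948-@972.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # @948: reject an empty pid
        kill -0 "$pid" || return 1                   # @952: still running?
        if [[ $(uname) == Linux ]]; then             # @953
            local name
            name=$(ps --no-headers -o comm= "$pid")  # @954: reactor_2 in this run
            [[ $name == sudo ]] && return 1          # @958: simplified; the traced
                                                     # helper branches on sudo here
        fi
        echo "killing process with pid $pid"         # @966
        kill "$pid"                                  # @967
        wait "$pid"                                  # @972
    }

The rest of this section is the try.txt dump from the @141 cat: bdevperf's own log for the 90-second run. The repeated "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions are NVMe path-related status (SCT 3h, SC 02h), which is exactly what the host should see on a path whose listener has been set inaccessible; the multipath layer retries the I/O on the remaining accessible path.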
00:34:40.687 [2024-07-15 08:01:12.691098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.691687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.691711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.692971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.692996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:40.687 [2024-07-15 08:01:12.693783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.693963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.693990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.694033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.694058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.694092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.694117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.687 [2024-07-15 08:01:12.694179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.687 [2024-07-15 08:01:12.694204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.694962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.694987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.688 [2024-07-15 08:01:12.695062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:40.688 [2024-07-15 08:01:12.695616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.688 [2024-07-15 08:01:12.695640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:40.688 
[2024-07-15 08:01:12.695673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.688 [2024-07-15 08:01:12.695702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0
[... long uniform run of nvme_qpair.c notice pairs elided: through 08:01:12.707139 the log alternates the same two messages for every outstanding I/O on qid:1 -- a command print (WRITE lba:38640-39200, SGL DATA BLOCK OFFSET 0x0 len:0x1000, or READ lba:38184-38632, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) immediately followed by its completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), with only cid, lba, and sqhd varying as sqhd wraps past 0x007f ...]
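Every completion in this run carries the same status pair, which spdk_nvme_print_completion renders as "(03/02)". As a reading aid, the standalone C sketch below (an illustration assuming the standard NVMe completion-status layout, not SPDK's own print routine) shows how that pair splits into a Status Code Type and a Status Code: SCT 0x3 is Path Related Status, and within it SC 0x02 is Asymmetric Access Inaccessible, i.e. the ANA state of the path these I/Os took.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch, not SPDK source: decode a 16-bit NVMe completion
     * status word.  Per the NVMe base specification, bit 0 is the phase
     * tag, bits 1-8 the Status Code (SC), and bits 9-11 the Status Code
     * Type (SCT). */
    int main(void)
    {
        /* Construct the "(03/02)" status seen throughout this log. */
        uint16_t status = (uint16_t)((0x3 << 9) | (0x02 << 1));

        uint8_t sct = (status >> 9) & 0x7;  /* 0x3: Path Related Status */
        uint8_t sc  = (status >> 1) & 0xff; /* 0x02: ANA Inaccessible   */

        printf("status (%02x/%02x): %s\n", sct, sc,
               (sct == 0x3 && sc == 0x02)
                   ? "ASYMMETRIC ACCESS INACCESSIBLE" : "other");
        return 0;
    }

The lba/len fields in the command prints are in logical blocks; assuming the 512-byte block size used here, len:8 is 8 x 512 = 4096 bytes, matching the len:0x1000 shown for each SGL data block.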
[... the same command/completion notice pairs continue, all failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, through 08:01:12.712765 ...]
00:34:40.691 [2024-07-15 08:01:12.712823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.691 [2024-07-15 08:01:12.712856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:40.691 [2024-07-15 08:01:12.712904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.691 [2024-07-15 08:01:12.712945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:34:40.691 [2024-07-15 08:01:12.713002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.713971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.713999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714302] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.714937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 
p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.714993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.691 [2024-07-15 08:01:12.715583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.691 [2024-07-15 08:01:12.715640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.691 [2024-07-15 08:01:12.715696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.691 [2024-07-15 08:01:12.715728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.691 [2024-07-15 08:01:12.715752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.715783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.715807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.715839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.715889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.715943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.715970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.716529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.716943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.716968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.692 [2024-07-15 08:01:12.717028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.717093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.717155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.717228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.717285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.717341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.717375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.717398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.718945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.718970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:34:40.692 [2024-07-15 08:01:12.719599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.719946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.719990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.692 [2024-07-15 08:01:12.720743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:40.692 [2024-07-15 08:01:12.720776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.720803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.720837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.720884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.720948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.693 [2024-07-15 08:01:12.721623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:40.693 [2024-07-15 08:01:12.721685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.721741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.721797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.721900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.721954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.721980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.722017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.722046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.723034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.723075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.723125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.723165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.723216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.723243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.723293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.693 [2024-07-15 08:01:12.723318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.693 [2024-07-15 08:01:12.723352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.693 [2024-07-15 08:01:12.723378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:40.693 [2024-07-15 08:01:12.723412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.693 [2024-07-15 08:01:12.723451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0
[… the same READ command/completion pair repeats for lba:38272 through lba:38568 (len:8, qid:1, sqhd:003b through 0060); every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 …]
00:34:40.693 [2024-07-15 08:01:12.725994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.693 [2024-07-15 08:01:12.726019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[… the same WRITE command/completion pair repeats for lba:38648 through lba:38752 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, sqhd:0062 through 006f); every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 …]
00:34:40.694 [2024-07-15 08:01:12.726920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.694 [2024-07-15 08:01:12.726962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[… READ lba:38584 through lba:38632 fail the same way (sqhd:0071 through 0077) …]
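The "(03/02)" printed with every completion above is the NVMe status pair sct/sc: status code type 0x3 (Path Related Status) with status code 0x2 (Asymmetric Access Inaccessible), meaning the target is reporting the namespace as unreachable through this controller's ANA group, while dnr:0 leaves each command retryable on another path. A minimal standalone sketch of the decode (illustrative C, not SPDK's implementation; bit positions follow the NVMe base spec completion queue entry, dword 3):

/* Illustrative only: reproduce the "(03/02) ... dnr:0" decode above.
 * CQE dword 3 carries the status field in bits 31:17 (bit 16 = phase). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Status as seen in this log: sct=0x3, sc=0x2, dnr=0 (assumed constants). */
    uint32_t cpl_dw3 = (0x3u << 25) | (0x2u << 17);

    uint8_t sct = (cpl_dw3 >> 25) & 0x7;   /* status code type */
    uint8_t sc  = (cpl_dw3 >> 17) & 0xff;  /* status code      */
    uint8_t dnr = (cpl_dw3 >> 31) & 0x1;   /* do-not-retry     */

    printf("(%02x/%02x) dnr:%u -> %s\n", sct, sc, dnr,
           (sct == 0x3 && sc == 0x2) ?
           "ASYMMETRIC ACCESS INACCESSIBLE, retryable on another path" :
           "other status");
    return 0;
}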
00:34:40.694 [2024-07-15 08:01:12.727496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.694 [2024-07-15 08:01:12.727520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[… the same WRITE command/completion pair repeats for lba:38768 through lba:39200 (sqhd:0079 through 002f, wrapping through 0000); every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 …]
00:34:40.695 [2024-07-15 08:01:12.732268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.695 [2024-07-15 08:01:12.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
[… READ lba:38192 through lba:38248 fail the same way (sqhd:0031 through 0038) …]
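When skimming a capture like this, a quick tally is more useful than scrolling. A hypothetical filter (plain C, not part of the test suite; assumes one log record per line as reformatted above, reads the log on stdin) that counts command submissions and INACCESSIBLE completions:

/* Hypothetical helper: tally READ/WRITE submissions and ASYMMETRIC
 * ACCESS INACCESSIBLE completions in an nvme_qpair.c NOTICE log. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096];
    unsigned long reads = 0, writes = 0, inaccessible = 0;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        if (strstr(line, "nvme_io_qpair_print_command") != NULL) {
            if (strstr(line, "READ sqid:") != NULL)
                reads++;
            else if (strstr(line, "WRITE sqid:") != NULL)
                writes++;
        }
        if (strstr(line, "ASYMMETRIC ACCESS INACCESSIBLE") != NULL)
            inaccessible++;
    }

    printf("READ cmds: %lu  WRITE cmds: %lu  INACCESSIBLE completions: %lu\n",
           reads, writes, inaccessible);
    return 0;
}

Build and run as, e.g., cc tally.c -o tally && ./tally < build.log (file names hypothetical).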
00:34:40.695 [2024-07-15 08:01:12.733943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.695 [2024-07-15 08:01:12.733970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
[… a second full sweep follows with different cids: READ lba:38264 through lba:38568 (sqhd:003a through 0060), WRITE lba:38640 through lba:38752 (sqhd:0061 through 006f), READ lba:38576 through lba:38632 (sqhd:0070 through 0077), then WRITE lba:38760 through lba:38872 (sqhd:0078 through 0006, wrapping through 0000); every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 …]
00:34:40.696 [2024-07-15 08:01:12.739783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.696 [2024-07-15 08:01:12.739806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.739838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.739885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.739940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.739966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740463] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.740960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.740999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 
08:01:12.741186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39120 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.741960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.741985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:58 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.696 [2024-07-15 08:01:12.742483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.742540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.742596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.742645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.742670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.743971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.696 [2024-07-15 08:01:12.744767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.696 [2024-07-15 08:01:12.744790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.744827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.744850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.744913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.744955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.744997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 
08:01:12.745514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.745954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.745979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.746044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.746107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38568 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.746171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.746953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.746992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.747017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.747081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.747144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.697 [2024-07-15 08:01:12.747676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.747739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.747799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.747837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.747862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:12.748110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:12.748139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.330766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.330855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.330939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.330969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 
m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.697 [2024-07-15 08:01:28.331591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.697 [2024-07-15 08:01:28.331616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:40.697 [2024-07-15 08:01:28.331653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.697 [2024-07-15 08:01:28.331677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:40.697 [2024-07-15 08:01:28.332425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.697 [2024-07-15 08:01:28.332467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command WRITE/READ notices and matching spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices elided; all on qid:1, lba range ~37680-39032, sqhd wrapping from 0041 through 000f, timestamps 08:01:28.331653 through 08:01:28.360705; the pattern is identical throughout ...]
00:34:40.699 [2024-07-15 08:01:28.360705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95
nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.360730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.360773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.360811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.360899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.360931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.360969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.360996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.361032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.361062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.361099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.361124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.361161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.361196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.361234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.361259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.699 [2024-07-15 08:01:28.362242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.699 [2024-07-15 08:01:28.362325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.699 [2024-07-15 08:01:28.362385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.699 [2024-07-15 08:01:28.362458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.362539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.699 [2024-07-15 08:01:28.362601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:40.699 [2024-07-15 08:01:28.362637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.362662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.362697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.362722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.362772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.362797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.362833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.362923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.362954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.362990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:34:40.700 [2024-07-15 08:01:28.363052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.363570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.363957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.363983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.364018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.364043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.364077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.364102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.364137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.364162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.364223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.364248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.365715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.365776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.365837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:40.700 [2024-07-15 08:01:28.365920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.365992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.366033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.368675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.368724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.368767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.368793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.368828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.368867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.368917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.368943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.368982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:57 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.369800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.369946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.369980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.370006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.370041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.370067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.370101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.370126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.370162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.370210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.370245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.370281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.370318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.370342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.370378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.370402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.371869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.371912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:34:40.700 [2024-07-15 08:01:28.371961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.372000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.372462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.372520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.372554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.372579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.374869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.374913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.374963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.374996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.375092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.375175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.375637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.375811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.375894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.375933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.700 [2024-07-15 08:01:28.375964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:40.700 [2024-07-15 08:01:28.376001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.700 [2024-07-15 08:01:28.376027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:103 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.376808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.376935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.376964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.377000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.377024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.377059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.377084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.379257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.379309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.379369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.701 [2024-07-15 08:01:28.379396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.379432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.379457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.379493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.379518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.379552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.701 [2024-07-15 08:01:28.379578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:40.701 [2024-07-15 08:01:28.379612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.701 [... ~90 near-identical *NOTICE* pairs condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE commands on sqid:1 (nsid:1, lba 37928-39928, len:8), each answered by nvme_qpair.c: 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0x0010 through 0x0068, as queued I/O is failed back while the path's ANA state is inaccessible ...]
00:34:40.702 Received shutdown signal, test time was about 32.429564 seconds
00:34:40.702
00:34:40.702 Latency(us)
00:34:40.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:40.702 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:40.702 Verification LBA range: start 0x0 length 0x4000
00:34:40.702 Nvme0n1 : 32.43 5662.27 22.12 0.00 0.00 22567.03 1086.20 4076242.11
00:34:40.702 ===================================================================================================================
00:34:40.702 Total : 5662.27 22.12 0.00 0.00 22567.03 1086.20 4076242.11
00:34:40.702 08:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:40.978 08:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1215840 ']'
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1215840
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1215840 ']'
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1215840
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1215840
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1215840'
killing process with pid 1215840
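A quick sanity check on the MiB/s column in the latency summary above: at a 4096-byte I/O size, MiB/s is simply IOPS x 4096 / 2^20, so the reported 5662.27 IOPS works out to exactly the printed 22.12 MiB/s:

awk 'BEGIN { printf "%.2f MiB/s\n", 5662.27 * 4096 / (1024 * 1024) }'   # prints 22.12 MiB/s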
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1215840
08:01:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1215840
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
08:01:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:44.781 08:01:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:44.781
00:34:44.781 real 0m44.571s
00:34:44.781 user 2m12.580s
00:34:44.781 sys 0m10.086s
00:34:44.781 08:01:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:44.781 08:01:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:44.781 ************************************
00:34:44.781 END TEST nvmf_host_multipath_status
00:34:44.781 ************************************
00:34:44.781 08:01:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:34:44.781 08:01:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:44.781 08:01:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:34:44.781 08:01:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:44.781 08:01:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:44.781 ************************************
00:34:44.781 START TEST nvmf_discovery_remove_ifc
00:34:44.781 ************************************
00:34:44.781 08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:44.781 * Looking for test storage...
00:34:44.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
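The START TEST / END TEST banners and the real/user/sys triplet above come from the harness's run_test helper, which brackets each sub-test with banners and the shell's time builtin. A minimal sketch of that pattern (illustrative only, not SPDK's actual autotest_common.sh implementation):

run_test_sketch() {
  # hypothetical stand-in for run_test: banner, time the test, banner, keep the exit status
  local name=$1; shift
  echo "START TEST $name"
  time "$@"
  local rc=$?
  echo "END TEST $name"
  return $rc
}
run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp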
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2-6 -- # [condensed: the golangci/protoc/go toolchain bin directories are repeatedly prepended to PATH, PATH is exported, and the final value is echoed; the multi-hundred-character PATH strings are omitted]
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
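The nvme gen-hostnqn call above yields the uuid-based host NQN recorded in NVME_HOSTNQN; NVME_HOSTID is the same UUID with the nqn.2014-08.org.nvmexpress:uuid: prefix stripped. On a machine without nvme-cli, roughly the same pair could be fabricated from uuidgen (an illustrative fallback; the real tool prefers the host's persistent DMI/system UUID):

uuid=$(uuidgen | tr '[:upper:]' '[:lower:]')     # random UUID; gen-hostnqn prefers a stable one
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="$uuid"
echo "$NVME_HOSTNQN"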
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
08:01:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291-318 -- # [condensed: the pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays are declared and populated from pci_bus_cache with the supported device IDs - e810 0x1592/0x159b, x722 0x37d2, and eight mlx ConnectX variants]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342-352 -- # [condensed: driver checks for 0000:0a:00.0 - ice is neither unknown nor unbound, the device ID matches no ConnectX variant, transport is not rdma]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342-352 -- # [condensed: same driver checks for 0000:0a:00.1]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388-399 -- # [condensed: tcp transport, one net device found per port, link state up, sysfs path prefix stripped]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
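The pci_net_devs glob above is the whole trick for mapping a PCI function to its kernel interfaces: every netdev registered for a device appears as a directory under /sys/bus/pci/devices/<bdf>/net/. The same lookup as a standalone sketch, using this job's first e810 port as the example BDF:

pci=0000:0a:00.0
for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
  [ -e "$netdir" ] || continue        # glob may match nothing if no driver is bound
  dev=${netdir##*/}                   # strip the sysfs path, keep the interface name
  echo "Found net device under $pci: $dev ($(cat "$netdir/operstate"))"
done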
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
08:01:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1222594
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1222594
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1222594 ']'
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
08:01:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 08:01:38.094709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
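To summarize the topology the harness just built before the target boots: the target port cvl_0_0 is moved into the fresh namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. The same steps condensed into one sketch (interface names are this job's e810 ports; needs root):

ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                       # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1                # target namespace -> root namespace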
00:34:46.945 [2024-07-15 08:01:38.094854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:47.204 EAL: No free 2048 kB hugepages reported on node 1
00:34:47.204 [2024-07-15 08:01:38.231734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:47.464 [2024-07-15 08:01:38.480780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:47.464 [2024-07-15 08:01:38.480861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:47.464 [2024-07-15 08:01:38.480899] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:47.464 [2024-07-15 08:01:38.480926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:47.464 [2024-07-15 08:01:38.480958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:47.464 [2024-07-15 08:01:38.481019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:48.030 [2024-07-15 08:01:39.087237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:48.030 [2024-07-15 08:01:39.095438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:34:48.030 null0
00:34:48.030 [2024-07-15 08:01:39.127336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1222744
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1222744 /tmp/host.sock
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1222744 ']'
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
08:01:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 08:01:39.232367] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 08:01:39.232512] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222744 ]
00:34:48.288 EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 08:01:39.370333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 08:01:39.624627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
08:01:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:50.744 [2024-07-15 08:01:41.600144] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:34:50.744 [2024-07-15 08:01:41.600212] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:34:50.744 [2024-07-15 08:01:41.600274] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:34:50.744 [2024-07-15 08:01:41.686562] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:50.744 [2024-07-15 08:01:41.791220] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:50.744 [2024-07-15 08:01:41.791315] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:50.744 [2024-07-15 08:01:41.791393] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:50.744 [2024-07-15 08:01:41.791436] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:50.744 [2024-07-15 08:01:41.791489] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.744 [2024-07-15 08:01:41.797805] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 
00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:50.744 08:01:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:52.123 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.124 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.124 08:01:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.060 08:01:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.060 08:01:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:53.060 08:01:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:53.995 08:01:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.932 08:01:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:56.304 08:01:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:56.304 [2024-07-15 08:01:47.233172] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:56.304 [2024-07-15 08:01:47.233283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.304 [2024-07-15 08:01:47.233318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.304 [2024-07-15 08:01:47.233347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.305 [2024-07-15 08:01:47.233371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.305 [2024-07-15 08:01:47.233395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.305 [2024-07-15 08:01:47.233418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.305 [2024-07-15 08:01:47.233441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.305 [2024-07-15 08:01:47.233464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.305 [2024-07-15 08:01:47.233488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.305 [2024-07-15 08:01:47.233511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.305 [2024-07-15 08:01:47.233532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:56.305 [2024-07-15 08:01:47.243186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:56.305 [2024-07-15 08:01:47.253243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.240 [2024-07-15 08:01:48.306933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:57.240 
[2024-07-15 08:01:48.307017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:34:57.240 [2024-07-15 08:01:48.307056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:57.240 [2024-07-15 08:01:48.307120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:57.240 [2024-07-15 08:01:48.307805] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:57.240 [2024-07-15 08:01:48.307854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:57.240 [2024-07-15 08:01:48.307907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:57.240 [2024-07-15 08:01:48.307935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:57.240 [2024-07-15 08:01:48.308006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:57.240 [2024-07-15 08:01:48.308034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:57.240 08:01:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:58.206 [2024-07-15 08:01:49.310567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:58.206 [2024-07-15 08:01:49.310612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:58.206 [2024-07-15 08:01:49.310636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:58.206 [2024-07-15 08:01:49.310657] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:58.206 [2024-07-15 08:01:49.310694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:58.206 [2024-07-15 08:01:49.310768] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:58.206 [2024-07-15 08:01:49.310834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.206 [2024-07-15 08:01:49.310868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.206 [2024-07-15 08:01:49.310917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.206 [2024-07-15 08:01:49.310957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.206 [2024-07-15 08:01:49.310979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.206 [2024-07-15 08:01:49.310999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.206 [2024-07-15 08:01:49.311021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.206 [2024-07-15 08:01:49.311041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.206 [2024-07-15 08:01:49.311063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.206 [2024-07-15 08:01:49.311083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.206 [2024-07-15 08:01:49.311102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:58.206 [2024-07-15 08:01:49.311259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:58.206 [2024-07-15 08:01:49.312385] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:58.206 [2024-07-15 08:01:49.312421] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.206 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.463 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:58.463 08:01:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@10 -- # set +x 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:59.402 08:01:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.337 [2024-07-15 08:01:51.326187] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:00.337 [2024-07-15 08:01:51.326237] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:00.337 [2024-07-15 08:01:51.326273] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:00.337 [2024-07-15 08:01:51.452762] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:00.337 08:01:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.337 [2024-07-15 08:01:51.558352] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:00.337 [2024-07-15 08:01:51.558431] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:00.337 [2024-07-15 08:01:51.558524] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:00.337 [2024-07-15 08:01:51.558567] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:00.337 [2024-07-15 08:01:51.558596] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:00.337 [2024-07-15 08:01:51.565073] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 
00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1222744 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1222744 ']' 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1222744 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222744 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222744' 00:35:01.715 killing process with pid 1222744 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1222744 00:35:01.715 08:01:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1222744 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:02.646 rmmod nvme_tcp 00:35:02.646 rmmod nvme_fabrics 00:35:02.646 rmmod nvme_keyring 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1222594 ']' 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1222594 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1222594 ']' 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1222594 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222594 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222594' 00:35:02.646 killing process with pid 1222594 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1222594 00:35:02.646 08:01:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1222594 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.026 08:01:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.931 08:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:05.931 00:35:05.931 real 0m21.284s 00:35:05.931 user 0m31.205s 00:35:05.931 sys 0m3.317s 00:35:05.931 08:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:05.931 08:01:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.931 ************************************ 00:35:05.931 END TEST nvmf_discovery_remove_ifc 00:35:05.931 ************************************ 00:35:05.931 08:01:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:05.931 08:01:57 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:05.931 08:01:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:05.931 08:01:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:05.931 08:01:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.931 ************************************ 00:35:05.931 START TEST nvmf_identify_kernel_target 00:35:05.931 ************************************ 
00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:05.931 * Looking for test storage... 00:35:05.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.931 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:06.190 08:01:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.190 08:01:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:08.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:08.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:08.088 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:08.088 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.088 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:08.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:35:08.089 00:35:08.089 --- 10.0.0.2 ping statistics --- 00:35:08.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.089 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:08.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:35:08.089 00:35:08.089 --- 10.0.0.1 ping statistics --- 00:35:08.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.089 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:08.089 08:01:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:08.089 08:01:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:09.021 Waiting for block devices as requested 00:35:09.021 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:09.281 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:09.281 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.539 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.539 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:09.539 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:09.539 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:09.797 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:09.797 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:09.797 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:09.797 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:10.057 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:10.057 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:10.057 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:10.057 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:10.316 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:10.316 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:10.316 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:10.575 No valid GPT data, bailing 00:35:10.575 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:10.575 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:10.575 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:10.575 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:10.576 00:35:10.576 Discovery Log Number of Records 2, Generation counter 2 00:35:10.576 =====Discovery Log Entry 0====== 00:35:10.576 trtype: tcp 00:35:10.576 adrfam: ipv4 00:35:10.576 subtype: current discovery subsystem 00:35:10.576 treq: not specified, sq flow control disable supported 00:35:10.576 portid: 1 00:35:10.576 trsvcid: 4420 00:35:10.576 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:10.576 traddr: 10.0.0.1 00:35:10.576 eflags: none 00:35:10.576 sectype: none 00:35:10.576 =====Discovery Log Entry 1====== 00:35:10.576 trtype: tcp 00:35:10.576 adrfam: ipv4 00:35:10.576 subtype: nvme subsystem 00:35:10.576 treq: not specified, sq flow control disable supported 00:35:10.576 portid: 1 00:35:10.576 trsvcid: 4420 00:35:10.576 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:10.576 traddr: 10.0.0.1 00:35:10.576 eflags: none 00:35:10.576 sectype: none 00:35:10.576 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:10.576 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:10.576 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.836 ===================================================== 00:35:10.836 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:10.836 ===================================================== 00:35:10.836 Controller Capabilities/Features 00:35:10.836 ================================ 00:35:10.836 Vendor ID: 0000 00:35:10.836 Subsystem Vendor ID: 0000 00:35:10.836 Serial Number: 7d128a18bf5ff407e64e 00:35:10.836 Model Number: Linux 00:35:10.836 Firmware Version: 6.7.0-68 00:35:10.836 Recommended Arb Burst: 0 00:35:10.836 IEEE OUI Identifier: 00 00 00 00:35:10.836 Multi-path I/O 00:35:10.836 May have multiple subsystem ports: No 00:35:10.836 May have multiple 
controllers: No 00:35:10.836 Associated with SR-IOV VF: No 00:35:10.836 Max Data Transfer Size: Unlimited 00:35:10.836 Max Number of Namespaces: 0 00:35:10.836 Max Number of I/O Queues: 1024 00:35:10.836 NVMe Specification Version (VS): 1.3 00:35:10.836 NVMe Specification Version (Identify): 1.3 00:35:10.836 Maximum Queue Entries: 1024 00:35:10.836 Contiguous Queues Required: No 00:35:10.836 Arbitration Mechanisms Supported 00:35:10.836 Weighted Round Robin: Not Supported 00:35:10.836 Vendor Specific: Not Supported 00:35:10.836 Reset Timeout: 7500 ms 00:35:10.836 Doorbell Stride: 4 bytes 00:35:10.836 NVM Subsystem Reset: Not Supported 00:35:10.836 Command Sets Supported 00:35:10.836 NVM Command Set: Supported 00:35:10.836 Boot Partition: Not Supported 00:35:10.836 Memory Page Size Minimum: 4096 bytes 00:35:10.836 Memory Page Size Maximum: 4096 bytes 00:35:10.836 Persistent Memory Region: Not Supported 00:35:10.836 Optional Asynchronous Events Supported 00:35:10.836 Namespace Attribute Notices: Not Supported 00:35:10.836 Firmware Activation Notices: Not Supported 00:35:10.836 ANA Change Notices: Not Supported 00:35:10.836 PLE Aggregate Log Change Notices: Not Supported 00:35:10.836 LBA Status Info Alert Notices: Not Supported 00:35:10.836 EGE Aggregate Log Change Notices: Not Supported 00:35:10.836 Normal NVM Subsystem Shutdown event: Not Supported 00:35:10.836 Zone Descriptor Change Notices: Not Supported 00:35:10.836 Discovery Log Change Notices: Supported 00:35:10.836 Controller Attributes 00:35:10.836 128-bit Host Identifier: Not Supported 00:35:10.836 Non-Operational Permissive Mode: Not Supported 00:35:10.836 NVM Sets: Not Supported 00:35:10.836 Read Recovery Levels: Not Supported 00:35:10.836 Endurance Groups: Not Supported 00:35:10.836 Predictable Latency Mode: Not Supported 00:35:10.836 Traffic Based Keep ALive: Not Supported 00:35:10.836 Namespace Granularity: Not Supported 00:35:10.836 SQ Associations: Not Supported 00:35:10.836 UUID List: Not Supported 00:35:10.836 Multi-Domain Subsystem: Not Supported 00:35:10.836 Fixed Capacity Management: Not Supported 00:35:10.836 Variable Capacity Management: Not Supported 00:35:10.836 Delete Endurance Group: Not Supported 00:35:10.836 Delete NVM Set: Not Supported 00:35:10.836 Extended LBA Formats Supported: Not Supported 00:35:10.836 Flexible Data Placement Supported: Not Supported 00:35:10.836 00:35:10.836 Controller Memory Buffer Support 00:35:10.836 ================================ 00:35:10.836 Supported: No 00:35:10.836 00:35:10.836 Persistent Memory Region Support 00:35:10.836 ================================ 00:35:10.836 Supported: No 00:35:10.836 00:35:10.836 Admin Command Set Attributes 00:35:10.836 ============================ 00:35:10.836 Security Send/Receive: Not Supported 00:35:10.836 Format NVM: Not Supported 00:35:10.836 Firmware Activate/Download: Not Supported 00:35:10.836 Namespace Management: Not Supported 00:35:10.836 Device Self-Test: Not Supported 00:35:10.836 Directives: Not Supported 00:35:10.836 NVMe-MI: Not Supported 00:35:10.836 Virtualization Management: Not Supported 00:35:10.836 Doorbell Buffer Config: Not Supported 00:35:10.836 Get LBA Status Capability: Not Supported 00:35:10.836 Command & Feature Lockdown Capability: Not Supported 00:35:10.836 Abort Command Limit: 1 00:35:10.836 Async Event Request Limit: 1 00:35:10.836 Number of Firmware Slots: N/A 00:35:10.836 Firmware Slot 1 Read-Only: N/A 00:35:10.836 Firmware Activation Without Reset: N/A 00:35:10.836 Multiple Update Detection Support: N/A 
00:35:10.836 Firmware Update Granularity: No Information Provided 00:35:10.836 Per-Namespace SMART Log: No 00:35:10.836 Asymmetric Namespace Access Log Page: Not Supported 00:35:10.836 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:10.836 Command Effects Log Page: Not Supported 00:35:10.836 Get Log Page Extended Data: Supported 00:35:10.836 Telemetry Log Pages: Not Supported 00:35:10.836 Persistent Event Log Pages: Not Supported 00:35:10.836 Supported Log Pages Log Page: May Support 00:35:10.836 Commands Supported & Effects Log Page: Not Supported 00:35:10.836 Feature Identifiers & Effects Log Page:May Support 00:35:10.836 NVMe-MI Commands & Effects Log Page: May Support 00:35:10.836 Data Area 4 for Telemetry Log: Not Supported 00:35:10.836 Error Log Page Entries Supported: 1 00:35:10.836 Keep Alive: Not Supported 00:35:10.836 00:35:10.836 NVM Command Set Attributes 00:35:10.836 ========================== 00:35:10.836 Submission Queue Entry Size 00:35:10.836 Max: 1 00:35:10.836 Min: 1 00:35:10.836 Completion Queue Entry Size 00:35:10.836 Max: 1 00:35:10.836 Min: 1 00:35:10.836 Number of Namespaces: 0 00:35:10.836 Compare Command: Not Supported 00:35:10.836 Write Uncorrectable Command: Not Supported 00:35:10.836 Dataset Management Command: Not Supported 00:35:10.836 Write Zeroes Command: Not Supported 00:35:10.836 Set Features Save Field: Not Supported 00:35:10.836 Reservations: Not Supported 00:35:10.836 Timestamp: Not Supported 00:35:10.836 Copy: Not Supported 00:35:10.836 Volatile Write Cache: Not Present 00:35:10.836 Atomic Write Unit (Normal): 1 00:35:10.836 Atomic Write Unit (PFail): 1 00:35:10.836 Atomic Compare & Write Unit: 1 00:35:10.836 Fused Compare & Write: Not Supported 00:35:10.836 Scatter-Gather List 00:35:10.836 SGL Command Set: Supported 00:35:10.836 SGL Keyed: Not Supported 00:35:10.836 SGL Bit Bucket Descriptor: Not Supported 00:35:10.836 SGL Metadata Pointer: Not Supported 00:35:10.836 Oversized SGL: Not Supported 00:35:10.836 SGL Metadata Address: Not Supported 00:35:10.836 SGL Offset: Supported 00:35:10.836 Transport SGL Data Block: Not Supported 00:35:10.836 Replay Protected Memory Block: Not Supported 00:35:10.836 00:35:10.836 Firmware Slot Information 00:35:10.836 ========================= 00:35:10.836 Active slot: 0 00:35:10.836 00:35:10.836 00:35:10.836 Error Log 00:35:10.836 ========= 00:35:10.836 00:35:10.836 Active Namespaces 00:35:10.836 ================= 00:35:10.836 Discovery Log Page 00:35:10.836 ================== 00:35:10.836 Generation Counter: 2 00:35:10.836 Number of Records: 2 00:35:10.836 Record Format: 0 00:35:10.836 00:35:10.836 Discovery Log Entry 0 00:35:10.836 ---------------------- 00:35:10.836 Transport Type: 3 (TCP) 00:35:10.836 Address Family: 1 (IPv4) 00:35:10.836 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:10.836 Entry Flags: 00:35:10.836 Duplicate Returned Information: 0 00:35:10.836 Explicit Persistent Connection Support for Discovery: 0 00:35:10.836 Transport Requirements: 00:35:10.836 Secure Channel: Not Specified 00:35:10.836 Port ID: 1 (0x0001) 00:35:10.836 Controller ID: 65535 (0xffff) 00:35:10.836 Admin Max SQ Size: 32 00:35:10.836 Transport Service Identifier: 4420 00:35:10.836 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:10.836 Transport Address: 10.0.0.1 00:35:10.836 Discovery Log Entry 1 00:35:10.836 ---------------------- 00:35:10.836 Transport Type: 3 (TCP) 00:35:10.836 Address Family: 1 (IPv4) 00:35:10.836 Subsystem Type: 2 (NVM Subsystem) 00:35:10.836 Entry Flags: 
00:35:10.836 Duplicate Returned Information: 0 00:35:10.836 Explicit Persistent Connection Support for Discovery: 0 00:35:10.836 Transport Requirements: 00:35:10.836 Secure Channel: Not Specified 00:35:10.836 Port ID: 1 (0x0001) 00:35:10.836 Controller ID: 65535 (0xffff) 00:35:10.836 Admin Max SQ Size: 32 00:35:10.836 Transport Service Identifier: 4420 00:35:10.836 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:10.836 Transport Address: 10.0.0.1 00:35:10.836 08:02:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.836 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.836 get_feature(0x01) failed 00:35:10.836 get_feature(0x02) failed 00:35:10.836 get_feature(0x04) failed 00:35:10.836 ===================================================== 00:35:10.836 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:10.836 ===================================================== 00:35:10.836 Controller Capabilities/Features 00:35:10.837 ================================ 00:35:10.837 Vendor ID: 0000 00:35:10.837 Subsystem Vendor ID: 0000 00:35:10.837 Serial Number: d51bd40e91c115ebbe2d 00:35:10.837 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:10.837 Firmware Version: 6.7.0-68 00:35:10.837 Recommended Arb Burst: 6 00:35:10.837 IEEE OUI Identifier: 00 00 00 00:35:10.837 Multi-path I/O 00:35:10.837 May have multiple subsystem ports: Yes 00:35:10.837 May have multiple controllers: Yes 00:35:10.837 Associated with SR-IOV VF: No 00:35:10.837 Max Data Transfer Size: Unlimited 00:35:10.837 Max Number of Namespaces: 1024 00:35:10.837 Max Number of I/O Queues: 128 00:35:10.837 NVMe Specification Version (VS): 1.3 00:35:10.837 NVMe Specification Version (Identify): 1.3 00:35:10.837 Maximum Queue Entries: 1024 00:35:10.837 Contiguous Queues Required: No 00:35:10.837 Arbitration Mechanisms Supported 00:35:10.837 Weighted Round Robin: Not Supported 00:35:10.837 Vendor Specific: Not Supported 00:35:10.837 Reset Timeout: 7500 ms 00:35:10.837 Doorbell Stride: 4 bytes 00:35:10.837 NVM Subsystem Reset: Not Supported 00:35:10.837 Command Sets Supported 00:35:10.837 NVM Command Set: Supported 00:35:10.837 Boot Partition: Not Supported 00:35:10.837 Memory Page Size Minimum: 4096 bytes 00:35:10.837 Memory Page Size Maximum: 4096 bytes 00:35:10.837 Persistent Memory Region: Not Supported 00:35:10.837 Optional Asynchronous Events Supported 00:35:10.837 Namespace Attribute Notices: Supported 00:35:10.837 Firmware Activation Notices: Not Supported 00:35:10.837 ANA Change Notices: Supported 00:35:10.837 PLE Aggregate Log Change Notices: Not Supported 00:35:10.837 LBA Status Info Alert Notices: Not Supported 00:35:10.837 EGE Aggregate Log Change Notices: Not Supported 00:35:10.837 Normal NVM Subsystem Shutdown event: Not Supported 00:35:10.837 Zone Descriptor Change Notices: Not Supported 00:35:10.837 Discovery Log Change Notices: Not Supported 00:35:10.837 Controller Attributes 00:35:10.837 128-bit Host Identifier: Supported 00:35:10.837 Non-Operational Permissive Mode: Not Supported 00:35:10.837 NVM Sets: Not Supported 00:35:10.837 Read Recovery Levels: Not Supported 00:35:10.837 Endurance Groups: Not Supported 00:35:10.837 Predictable Latency Mode: Not Supported 00:35:10.837 Traffic Based Keep ALive: Supported 00:35:10.837 Namespace Granularity: Not Supported 
00:35:10.837 SQ Associations: Not Supported 00:35:10.837 UUID List: Not Supported 00:35:10.837 Multi-Domain Subsystem: Not Supported 00:35:10.837 Fixed Capacity Management: Not Supported 00:35:10.837 Variable Capacity Management: Not Supported 00:35:10.837 Delete Endurance Group: Not Supported 00:35:10.837 Delete NVM Set: Not Supported 00:35:10.837 Extended LBA Formats Supported: Not Supported 00:35:10.837 Flexible Data Placement Supported: Not Supported 00:35:10.837 00:35:10.837 Controller Memory Buffer Support 00:35:10.837 ================================ 00:35:10.837 Supported: No 00:35:10.837 00:35:10.837 Persistent Memory Region Support 00:35:10.837 ================================ 00:35:10.837 Supported: No 00:35:10.837 00:35:10.837 Admin Command Set Attributes 00:35:10.837 ============================ 00:35:10.837 Security Send/Receive: Not Supported 00:35:10.837 Format NVM: Not Supported 00:35:10.837 Firmware Activate/Download: Not Supported 00:35:10.837 Namespace Management: Not Supported 00:35:10.837 Device Self-Test: Not Supported 00:35:10.837 Directives: Not Supported 00:35:10.837 NVMe-MI: Not Supported 00:35:10.837 Virtualization Management: Not Supported 00:35:10.837 Doorbell Buffer Config: Not Supported 00:35:10.837 Get LBA Status Capability: Not Supported 00:35:10.837 Command & Feature Lockdown Capability: Not Supported 00:35:10.837 Abort Command Limit: 4 00:35:10.837 Async Event Request Limit: 4 00:35:10.837 Number of Firmware Slots: N/A 00:35:10.837 Firmware Slot 1 Read-Only: N/A 00:35:10.837 Firmware Activation Without Reset: N/A 00:35:10.837 Multiple Update Detection Support: N/A 00:35:10.837 Firmware Update Granularity: No Information Provided 00:35:10.837 Per-Namespace SMART Log: Yes 00:35:10.837 Asymmetric Namespace Access Log Page: Supported 00:35:10.837 ANA Transition Time : 10 sec 00:35:10.837 00:35:10.837 Asymmetric Namespace Access Capabilities 00:35:10.837 ANA Optimized State : Supported 00:35:10.837 ANA Non-Optimized State : Supported 00:35:10.837 ANA Inaccessible State : Supported 00:35:10.837 ANA Persistent Loss State : Supported 00:35:10.837 ANA Change State : Supported 00:35:10.837 ANAGRPID is not changed : No 00:35:10.837 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:10.837 00:35:10.837 ANA Group Identifier Maximum : 128 00:35:10.837 Number of ANA Group Identifiers : 128 00:35:10.837 Max Number of Allowed Namespaces : 1024 00:35:10.837 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:10.837 Command Effects Log Page: Supported 00:35:10.837 Get Log Page Extended Data: Supported 00:35:10.837 Telemetry Log Pages: Not Supported 00:35:10.837 Persistent Event Log Pages: Not Supported 00:35:10.837 Supported Log Pages Log Page: May Support 00:35:10.837 Commands Supported & Effects Log Page: Not Supported 00:35:10.837 Feature Identifiers & Effects Log Page:May Support 00:35:10.837 NVMe-MI Commands & Effects Log Page: May Support 00:35:10.837 Data Area 4 for Telemetry Log: Not Supported 00:35:10.837 Error Log Page Entries Supported: 128 00:35:10.837 Keep Alive: Supported 00:35:10.837 Keep Alive Granularity: 1000 ms 00:35:10.837 00:35:10.837 NVM Command Set Attributes 00:35:10.837 ========================== 00:35:10.837 Submission Queue Entry Size 00:35:10.837 Max: 64 00:35:10.837 Min: 64 00:35:10.837 Completion Queue Entry Size 00:35:10.837 Max: 16 00:35:10.837 Min: 16 00:35:10.837 Number of Namespaces: 1024 00:35:10.837 Compare Command: Not Supported 00:35:10.837 Write Uncorrectable Command: Not Supported 00:35:10.837 Dataset Management Command: Supported 
00:35:10.837 Write Zeroes Command: Supported 00:35:10.837 Set Features Save Field: Not Supported 00:35:10.837 Reservations: Not Supported 00:35:10.837 Timestamp: Not Supported 00:35:10.837 Copy: Not Supported 00:35:10.837 Volatile Write Cache: Present 00:35:10.837 Atomic Write Unit (Normal): 1 00:35:10.837 Atomic Write Unit (PFail): 1 00:35:10.837 Atomic Compare & Write Unit: 1 00:35:10.837 Fused Compare & Write: Not Supported 00:35:10.837 Scatter-Gather List 00:35:10.837 SGL Command Set: Supported 00:35:10.837 SGL Keyed: Not Supported 00:35:10.837 SGL Bit Bucket Descriptor: Not Supported 00:35:10.837 SGL Metadata Pointer: Not Supported 00:35:10.837 Oversized SGL: Not Supported 00:35:10.837 SGL Metadata Address: Not Supported 00:35:10.837 SGL Offset: Supported 00:35:10.837 Transport SGL Data Block: Not Supported 00:35:10.837 Replay Protected Memory Block: Not Supported 00:35:10.837 00:35:10.837 Firmware Slot Information 00:35:10.837 ========================= 00:35:10.837 Active slot: 0 00:35:10.837 00:35:10.837 Asymmetric Namespace Access 00:35:10.837 =========================== 00:35:10.837 Change Count : 0 00:35:10.837 Number of ANA Group Descriptors : 1 00:35:10.837 ANA Group Descriptor : 0 00:35:10.837 ANA Group ID : 1 00:35:10.837 Number of NSID Values : 1 00:35:10.837 Change Count : 0 00:35:10.837 ANA State : 1 00:35:10.837 Namespace Identifier : 1 00:35:10.837 00:35:10.837 Commands Supported and Effects 00:35:10.837 ============================== 00:35:10.837 Admin Commands 00:35:10.837 -------------- 00:35:10.837 Get Log Page (02h): Supported 00:35:10.837 Identify (06h): Supported 00:35:10.837 Abort (08h): Supported 00:35:10.837 Set Features (09h): Supported 00:35:10.837 Get Features (0Ah): Supported 00:35:10.837 Asynchronous Event Request (0Ch): Supported 00:35:10.837 Keep Alive (18h): Supported 00:35:10.837 I/O Commands 00:35:10.837 ------------ 00:35:10.837 Flush (00h): Supported 00:35:10.837 Write (01h): Supported LBA-Change 00:35:10.837 Read (02h): Supported 00:35:10.837 Write Zeroes (08h): Supported LBA-Change 00:35:10.837 Dataset Management (09h): Supported 00:35:10.837 00:35:10.837 Error Log 00:35:10.837 ========= 00:35:10.837 Entry: 0 00:35:10.837 Error Count: 0x3 00:35:10.837 Submission Queue Id: 0x0 00:35:10.837 Command Id: 0x5 00:35:10.837 Phase Bit: 0 00:35:10.837 Status Code: 0x2 00:35:10.837 Status Code Type: 0x0 00:35:10.837 Do Not Retry: 1 00:35:10.837 Error Location: 0x28 00:35:10.837 LBA: 0x0 00:35:10.837 Namespace: 0x0 00:35:10.837 Vendor Log Page: 0x0 00:35:10.837 ----------- 00:35:10.837 Entry: 1 00:35:10.837 Error Count: 0x2 00:35:10.837 Submission Queue Id: 0x0 00:35:10.837 Command Id: 0x5 00:35:10.837 Phase Bit: 0 00:35:10.837 Status Code: 0x2 00:35:10.837 Status Code Type: 0x0 00:35:10.837 Do Not Retry: 1 00:35:10.837 Error Location: 0x28 00:35:10.837 LBA: 0x0 00:35:10.837 Namespace: 0x0 00:35:10.837 Vendor Log Page: 0x0 00:35:10.837 ----------- 00:35:10.837 Entry: 2 00:35:10.838 Error Count: 0x1 00:35:10.838 Submission Queue Id: 0x0 00:35:10.838 Command Id: 0x4 00:35:10.838 Phase Bit: 0 00:35:10.838 Status Code: 0x2 00:35:10.838 Status Code Type: 0x0 00:35:10.838 Do Not Retry: 1 00:35:10.838 Error Location: 0x28 00:35:10.838 LBA: 0x0 00:35:10.838 Namespace: 0x0 00:35:10.838 Vendor Log Page: 0x0 00:35:10.838 00:35:10.838 Number of Queues 00:35:10.838 ================ 00:35:10.838 Number of I/O Submission Queues: 128 00:35:10.838 Number of I/O Completion Queues: 128 00:35:10.838 00:35:10.838 ZNS Specific Controller Data 00:35:10.838 
============================ 00:35:10.838 Zone Append Size Limit: 0 00:35:10.838 00:35:10.838 00:35:10.838 Active Namespaces 00:35:10.838 ================= 00:35:10.838 get_feature(0x05) failed 00:35:10.838 Namespace ID:1 00:35:10.838 Command Set Identifier: NVM (00h) 00:35:10.838 Deallocate: Supported 00:35:10.838 Deallocated/Unwritten Error: Not Supported 00:35:10.838 Deallocated Read Value: Unknown 00:35:10.838 Deallocate in Write Zeroes: Not Supported 00:35:10.838 Deallocated Guard Field: 0xFFFF 00:35:10.838 Flush: Supported 00:35:10.838 Reservation: Not Supported 00:35:10.838 Namespace Sharing Capabilities: Multiple Controllers 00:35:10.838 Size (in LBAs): 1953525168 (931GiB) 00:35:10.838 Capacity (in LBAs): 1953525168 (931GiB) 00:35:10.838 Utilization (in LBAs): 1953525168 (931GiB) 00:35:10.838 UUID: 0b6887a0-3fcd-48ec-9fcc-e14dbdecedfd 00:35:10.838 Thin Provisioning: Not Supported 00:35:10.838 Per-NS Atomic Units: Yes 00:35:10.838 Atomic Boundary Size (Normal): 0 00:35:10.838 Atomic Boundary Size (PFail): 0 00:35:10.838 Atomic Boundary Offset: 0 00:35:10.838 NGUID/EUI64 Never Reused: No 00:35:10.838 ANA group ID: 1 00:35:10.838 Namespace Write Protected: No 00:35:10.838 Number of LBA Formats: 1 00:35:10.838 Current LBA Format: LBA Format #00 00:35:10.838 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:10.838 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:10.838 rmmod nvme_tcp 00:35:10.838 rmmod nvme_fabrics 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.838 08:02:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.367 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:13.367 
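With the host-side modules unloaded, clean_kernel_target below unwinds the configfs tree that configure_kernel_target assembled at the start of this test. xtrace records the echo arguments but not their redirection targets, so in the following summary the nvmet attribute names are reconstructed from the standard /sys/kernel/config/nvmet layout rather than read from the log:

modprobe nvmet                    # teardown later also removes nvmet_tcp
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

This is the layout behind the two-record discovery log earlier (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), and the /dev/nvme0n1 backing device accounts for the 1953525168-LBA (931GiB) namespace in the second identify pass.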
08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:13.367 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:13.367 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:13.368 08:02:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:13.939 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:14.198 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:14.198 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:15.136 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:15.136 00:35:15.136 real 0m9.222s 00:35:15.136 user 0m1.930s 00:35:15.136 sys 0m3.239s 00:35:15.136 08:02:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:15.136 08:02:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.136 ************************************ 00:35:15.136 END TEST nvmf_identify_kernel_target 00:35:15.136 ************************************ 00:35:15.136 08:02:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:15.136 08:02:06 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:15.136 08:02:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:15.136 08:02:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:15.136 08:02:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.394 ************************************ 00:35:15.394 START TEST nvmf_auth_host 00:35:15.394 ************************************ 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:15.394 * Looking for test storage... 00:35:15.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:15.394 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.298 
08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.298 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:17.298 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:17.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:17.299 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:17.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:17.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:35:17.299 00:35:17.299 --- 10.0.0.2 ping statistics --- 00:35:17.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.299 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:17.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:35:17.299 00:35:17.299 --- 10.0.0.1 ping statistics --- 00:35:17.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.299 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1230102 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1230102 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1230102 ']' 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
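One detail worth noting in the nvmfappstart trace above: the target application is launched inside the cvl_0_0_ns_spdk namespace and with the nvme_auth trace component enabled, which is why the DH-HMAC-CHAP exchanges later in this test show up in the log. A minimal sketch of that launch, assuming the usual background-and-wait pattern (the command line and socket path are from the log; waitforlisten is the suite's helper that polls the RPC socket):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"          # returns once /var/tmp/spdk.sock accepts RPCs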
00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:17.299 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:18.673 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0c1de033a6c6f095342ab3a1507f9ac 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gO7 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0c1de033a6c6f095342ab3a1507f9ac 0 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0c1de033a6c6f095342ab3a1507f9ac 0 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0c1de033a6c6f095342ab3a1507f9ac 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gO7 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gO7 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gO7 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:18.674 
08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=073507f01f9bce376eb479967fddfcf08928e1a826762699f589be8a0b57342b 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JAx 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 073507f01f9bce376eb479967fddfcf08928e1a826762699f589be8a0b57342b 3 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 073507f01f9bce376eb479967fddfcf08928e1a826762699f589be8a0b57342b 3 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=073507f01f9bce376eb479967fddfcf08928e1a826762699f589be8a0b57342b 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JAx 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JAx 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.JAx 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=59af3a1fb199c1cbe010333d914d4fefa3cdad508e38c54d 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yxR 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 59af3a1fb199c1cbe010333d914d4fefa3cdad508e38c54d 0 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 59af3a1fb199c1cbe010333d914d4fefa3cdad508e38c54d 0 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=59af3a1fb199c1cbe010333d914d4fefa3cdad508e38c54d 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yxR 00:35:18.674 08:02:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yxR 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yxR 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ba0d49cacf6a8916be31e5c8401442273b5198dc8ddb19c 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Z4s 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ba0d49cacf6a8916be31e5c8401442273b5198dc8ddb19c 2 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ba0d49cacf6a8916be31e5c8401442273b5198dc8ddb19c 2 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ba0d49cacf6a8916be31e5c8401442273b5198dc8ddb19c 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Z4s 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Z4s 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Z4s 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=27c21799716f594f3e8f35bec6c7d05b 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cOV 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 27c21799716f594f3e8f35bec6c7d05b 1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 27c21799716f594f3e8f35bec6c7d05b 1 
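Every gen_dhchap_key call traced above follows the same recipe: draw len/2 random bytes as a hex string with xxd, use that ASCII hex string itself as the secret, and pipe it through an unseen `python -` heredoc that emits the DHHC-1:<digest>:<base64>: blob later fed to the keyring. A self-contained sketch of that recipe, assuming the standard NVMe DH-HMAC-CHAP secret layout (base64 over the secret bytes followed by their CRC-32, the same shape nvme-cli's gen-dhchap-key produces); the helper name, the Python body, and the CRC byte order are ours, not SPDK's:

# hypothetical re-creation of the gen_dhchap_key recipe visible in the trace
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)  # same table as the trace
gen_key() {
    local digest=${digests[$1]} len=$2 hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of entropy
    file=$(mktemp -t "spdk.key-$1.XXX")
    python3 - "$hex" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC byte order is an assumption
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"                       # keys are secrets; the trace does the same
    echo "$file"
}
gen_key null 32    # would mirror the gen_dhchap_key null 32 call at the top of this stretch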
00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=27c21799716f594f3e8f35bec6c7d05b 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cOV 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cOV 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cOV 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac60d1becb99fc6d75940b4fdeb8a0cf 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9xG 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac60d1becb99fc6d75940b4fdeb8a0cf 1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac60d1becb99fc6d75940b4fdeb8a0cf 1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac60d1becb99fc6d75940b4fdeb8a0cf 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9xG 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9xG 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9xG 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.674 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=2b817699620d00bff58fe43963f80de7ba113de3f866e50a 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hwX 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b817699620d00bff58fe43963f80de7ba113de3f866e50a 2 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b817699620d00bff58fe43963f80de7ba113de3f866e50a 2 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b817699620d00bff58fe43963f80de7ba113de3f866e50a 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hwX 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hwX 00:35:18.675 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hwX 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c0023182fe7e255ba4f650a14492fbed 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GOB 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c0023182fe7e255ba4f650a14492fbed 0 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c0023182fe7e255ba4f650a14492fbed 0 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c0023182fe7e255ba4f650a14492fbed 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GOB 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GOB 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GOB 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=81f29eca8db9e9cdca46526ccdca375c15c83450b7e258ede0365e2585861736 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eZc 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 81f29eca8db9e9cdca46526ccdca375c15c83450b7e258ede0365e2585861736 3 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 81f29eca8db9e9cdca46526ccdca375c15c83450b7e258ede0365e2585861736 3 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=81f29eca8db9e9cdca46526ccdca375c15c83450b7e258ede0365e2585861736 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.933 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eZc 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eZc 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eZc 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1230102 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1230102 ']' 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.933 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:18.934 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
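With all five key pairs staged, the script blocks in waitforlisten until the target process (pid 1230102) opens /var/tmp/spdk.sock. The helper's body is not visible in this slice of the trace; a minimal stand-in that polls the JSON-RPC socket the same way might look like this (the retry count mirrors the max_retries=100 local above, but the sleep interval and the $rootdir layout are assumptions; rpc_get_methods is a real SPDK RPC):

# hypothetical waitforlisten stand-in: poll the RPC socket until it answers
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # give up if the target died
        if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
            return 0                               # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}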
00:35:18.934 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:18.934 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gO7 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.JAx ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JAx 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yxR 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Z4s ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Z4s 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cOV 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9xG ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9xG 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
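Stripped of the xtrace noise, the host/auth.sh@80..82 loop this stretch is stepping through registers every generated file with the target's keyring: keyN for the host key, ckeyN for the optional controller key (key 4 was generated without one, so its ckey branch is skipped):

for i in "${!keys[@]}"; do
    # name the file key<i> in the target's keyring
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # register the controller (bidirectional) key only when one was generated
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done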
00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hwX 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GOB ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GOB 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eZc 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
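configure_kernel_target then assembles the kernel-side NVMe-oF target through configfs. The bare `echo` lines in the next stretch of the trace do not show their redirection targets; pairing them with the standard nvmet attributes is our reading (the attribute names are the kernel's, the model-string destination is a guess, the values are the ones this log uses):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"  # exact attribute not shown in the trace
echo 1            > "$subsys/attr_allow_any_host"  # host/auth.sh later writes 0 and whitelists host0
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"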
00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:35:19.192 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:35:20.563 Waiting for block devices as requested
00:35:20.563 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:35:20.563 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:35:20.563 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:35:20.563 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:35:20.822 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:35:20.822 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:35:20.822 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:35:20.822 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:35:21.081 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:35:21.081 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:35:21.081 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:35:21.339 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:35:21.339 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:35:21.339 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:35:21.339 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:35:21.596 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:35:21.596 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:35:22.162
00:35:22.162 Discovery Log Number of Records 2, Generation counter 2
00:35:22.162 =====Discovery Log Entry 0======
00:35:22.162 trtype: tcp
00:35:22.162 adrfam: ipv4
00:35:22.162 subtype: current discovery subsystem
00:35:22.162 treq: not specified, sq flow control disable supported
00:35:22.162 portid: 1
00:35:22.162 trsvcid: 4420
00:35:22.162 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:35:22.162 traddr: 10.0.0.1
00:35:22.162 eflags: none
00:35:22.162 sectype: none
00:35:22.162 =====Discovery Log Entry 1======
00:35:22.162 trtype: tcp
00:35:22.162 adrfam: ipv4
00:35:22.162 subtype: nvme subsystem
00:35:22.162 treq: not specified, sq flow control disable supported
00:35:22.162 portid: 1
00:35:22.162 trsvcid: 4420
00:35:22.162 subnqn: nqn.2024-02.io.spdk:cnode0
00:35:22.162 traddr: 10.0.0.1
00:35:22.162 eflags: none
00:35:22.162 sectype: none
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:35:22.162 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==:
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==:
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==:
00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==:
]] 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.163 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.421 nvme0n1 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.421 
08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.421 
08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.421 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.680 nvme0n1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.680 08:02:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.680 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.938 nvme0n1 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
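With the target side primed (nvmet_auth_set_key has just echoed the matching 'hmac(sha256)', ffdhe2048, and DHHC-1 strings into host0's nvmet auth attributes), each connect_authenticate pass in this stretch boils down to four RPCs. Flattened out of the trace, with every value as logged for the keyid=1 cycle:

# pin the initiator to the digest/dhgroup pair under test
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# attach with the keyring names registered earlier; DH-HMAC-CHAP runs during connect
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# authentication succeeded iff the controller actually materialized
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# tear down so the next digest/dhgroup/key combination starts clean
rpc_cmd bdev_nvme_detach_controller nvme0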
00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.938 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.938 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.939 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.939 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.939 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.196 nvme0n1 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:23.196 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:23.197 08:02:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.197 nvme0n1 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.197 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.455 nvme0n1 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.455 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.715 nvme0n1 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.715 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.974 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.975 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.975 nvme0n1 00:35:23.975 
08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.975 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.233 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.234 nvme0n1 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.234 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
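Every block in this trace is one pass of the same digest x DH-group x keyid sweep: pin the host to a single combination with bdev_nvme_set_options, attach with the matching key pair, confirm the controller came up, detach. A minimal sketch of one pass, with the NQNs, address, port, and RPC flags taken verbatim from the trace; the loop values and the assumption that key0..key4 / ckey0..ckey3 are already registered with the host are illustrative:

  # One iteration of the sweep, distilled; digest/dhgroup/keyid vary per pass
  # and the named keys are assumed pre-registered (not shown in this excerpt).
  digest=sha256 dhgroup=ffdhe3072 keyid=3
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
  # A successful handshake is asserted by name, then the controller is torn down:
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
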
00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.491 nvme0n1 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.491 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.750 
08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.750 08:02:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.750 nvme0n1 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.750 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.009 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:25.010 08:02:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.010 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.010 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.268 nvme0n1 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.268 08:02:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.268 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.527 nvme0n1 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.527 08:02:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.527 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.786 nvme0n1 00:35:25.786 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.786 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.786 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.786 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.786 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.044 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.044 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.044 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.044 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.044 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
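The get_main_ns_ip helper that runs before every attach (nvmf/common.sh@741-755 in the trace) resolves which address the host should dial from the transport in use. A plausible reconstruction from the xtrace — the function body is not shown verbatim in the log, and TEST_TRANSPORT is assumed to be the variable holding "tcp" here:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      # trace @747: [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]]
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # trace @748: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion; trace @750: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # trace @755: echo 10.0.0.1
  }
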
00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.045 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.305 nvme0n1 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.305 08:02:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.305 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.306 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.565 nvme0n1 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:26.565 08:02:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.565 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.822 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.440 nvme0n1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.440 
08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.440 08:02:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.440 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.009 nvme0n1 00:35:28.009 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.009 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.009 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.009 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.009 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.009 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.009 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.579 nvme0n1 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.579 
08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.579 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.146 nvme0n1 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.146 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.147 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.715 nvme0n1 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.715 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.648 nvme0n1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.648 08:02:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.648 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.579 nvme0n1 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.580 08:02:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.954 nvme0n1 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.954 
08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
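
The get_main_ns_ip block being traced here resolves which address the initiator should dial: it maps the active transport to the *name* of an environment variable, then dereferences that name. For a tcp run this lands on NVMF_INITIATOR_IP (10.0.0.1 above); an rdma run would pick NVMF_FIRST_TARGET_IP instead. A minimal sketch of the same logic, with the variable names taken from the trace (the early-return fallbacks are an assumption, not copied from nvmf/common.sh):

    # Sketch: resolve the dial address for the active transport by looking
    # up an env-var *name*, then expanding it indirectly with ${!ip}.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # assumption: bail if unset
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                             # indirect expansion of that name
        echo "${!ip}"                                           # e.g. 10.0.0.1
    }

    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip   # prints 10.0.0.1
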
00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.954 08:02:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.892 nvme0n1 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.892 
08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.892 08:02:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.829 nvme0n1 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.829 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.830 nvme0n1 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.830 08:02:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
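
Each pass of the loops above follows the same shape: pick a (digest, dhgroup, keyid) combination, install the matching key material on the kernel nvmet target via nvmet_auth_set_key (the bare echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... lines are redirections into the target's attributes; xtrace does not print redirections, which is why their destinations are missing), restrict the SPDK host to that single digest/dhgroup pair with bdev_nvme_set_options, attach, confirm the controller exists, and detach before the next combination. Two idioms in the trace are worth decoding. First, the ckey=(${ckeys[keyid]:+...}) assignment builds an optional argument pair: ${var:+word} expands only when var is non-empty, so keyid=4, which has no controller key, silently drops the flag. Second, [[ nvme0 == \n\v\m\e\0 ]] is not corruption: it is xtrace's rendering of a quoted right-hand side, backslash-escaped to show the comparison is literal rather than a glob match. A hand-driven version of one pass, assuming the script's keys/ckeys arrays are loaded (rpc_cmd wraps SPDK's scripts/rpc.py; the NQNs, address, and flags are the ones visible in the trace):

    digest=sha384 dhgroup=ffdhe2048 keyid=1

    # Optional flag pair: expands to two arguments when a controller key
    # exists, and to nothing at all when ckeys[keyid] is empty (keyid=4).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"

    # Authentication succeeded only if the controller actually materialized;
    # then tear it down so the next digest/dhgroup/keyid pass starts clean.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
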
00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.830 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.090 nvme0n1 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.090 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.091 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.351 nvme0n1 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.351 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.610 nvme0n1 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.610 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.870 nvme0n1 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:35.870 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
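The get_main_ns_ip trace repeated above resolves the address used for every attach_controller call in this log. A minimal sketch of that helper, reconstructed from the nvmf/common.sh xtrace lines (the ${!ip} indirection and the TEST_TRANSPORT variable name are assumptions, inferred from NVMF_INITIATOR_IP expanding to 10.0.0.1 in this run):

get_main_ns_ip() {
	local ip                                            # common.sh@741
	local -A ip_candidates                              # common.sh@742
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP          # common.sh@744
	ip_candidates["tcp"]=NVMF_INITIATOR_IP              # common.sh@745
	[[ -z $TEST_TRANSPORT ]] && return 1                # common.sh@747; tcp here
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}                # common.sh@748
	[[ -z ${!ip} ]] && return 1                         # common.sh@750; 10.0.0.1 is set
	echo "${!ip}"                                       # common.sh@755
}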
00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.871 08:02:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.129 nvme0n1 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
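Each sha384 iteration that follows repeats the same two-step pattern: nvmet_auth_set_key programs the key (and optional controller key) into the kernel target, then connect_authenticate drives the SPDK host side over RPC. A condensed sketch of the loop, taken from the host/auth.sh@101-104 markers in this log (loop body abbreviated; only RPCs and flags actually shown in the trace are used):

for dhgroup in "${dhgroups[@]}"; do                     # auth.sh@101
	for keyid in "${!keys[@]}"; do                  # auth.sh@102
		nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # auth.sh@103, target side
		# connect_authenticate sha384 "$dhgroup" "$keyid" then runs:
		rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 \
			--dhchap-dhgroups "$dhgroup"
		rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
			-a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
			-n nqn.2024-02.io.spdk:cnode0 \
			--dhchap-key "key$keyid" \
			${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
		# verify the controller authenticated, then tear it down (auth.sh@64-65)
		[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
		rpc_cmd bdev_nvme_detach_controller nvme0
	done
done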
00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.129 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.130 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.130 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.387 nvme0n1 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.387 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.388 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.646 nvme0n1 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.646 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.903 nvme0n1 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.903 08:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.903 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.904 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.162 nvme0n1 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.162 08:02:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.162 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.422 nvme0n1 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.422 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.682 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.940 nvme0n1 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.940 08:02:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.940 08:02:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:37.940 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.941 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.199 nvme0n1 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:38.199 08:02:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.199 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.456 nvme0n1 00:35:38.456 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.457 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.457 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.457 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.457 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.714 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:38.715 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.975 nvme0n1 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.975 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.544 nvme0n1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.544 08:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.112 nvme0n1 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.112 08:02:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.112 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.370 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.938 nvme0n1 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.938 08:02:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.540 nvme0n1 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
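
The trace above and below is the body of a three-level loop in host/auth.sh: for every digest, for every DH group, for every key index, the target is programmed with nvmet_auth_set_key and the host then runs connect_authenticate against that combination. A minimal sketch of that driver loop follows; the array contents are assumptions based only on what this excerpt exercises (sha384/sha512 with ffdhe2048/ffdhe6144/ffdhe8192), the DHHC-1 secrets are placeholders, and nvmet_auth_set_key/connect_authenticate are the harness's own functions, not reimplemented here:

  # Sketch of the loop visible at host/auth.sh@100-@104. List contents are
  # assumptions drawn from this run, not the harness's full lists.
  digests=(sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
  keys=("DHHC-1:00:<elided>" "DHHC-1:00:<elided>" "DHHC-1:01:<elided>" "DHHC-1:02:<elided>" "DHHC-1:03:<elided>")
  ckeys=("DHHC-1:03:<elided>" "DHHC-1:02:<elided>" "DHHC-1:01:<elided>" "DHHC-1:00:<elided>" "")  # ckey for keyid 4 is empty in this run

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host connects, verifies, detaches
          done
      done
  done
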
00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.540 08:02:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.108 nvme0n1 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
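
Each connect_authenticate pass reduces to four RPCs against the running target, all of which appear verbatim in the trace: restrict the allowed digest and DH group, attach with the per-key DH-HMAC-CHAP secrets, confirm the controller came up, and detach. Roughly equivalent stand-alone commands for the sha384/ffdhe8192, keyid 0 case are sketched below, assuming scripts/rpc.py from an SPDK checkout (the log's rpc_cmd is effectively a wrapper around it); key0/ckey0 are key names registered earlier in the run, not shown in this excerpt:

  # One iteration of connect_authenticate, expressed as raw SPDK RPCs.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
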
00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.108 08:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.046 nvme0n1 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.046 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.047 08:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.983 nvme0n1 00:35:43.983 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.983 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.983 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.983 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.983 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.242 08:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.174 nvme0n1 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.174 08:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.109 nvme0n1 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.109 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.367 08:02:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.367 08:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.301 nvme0n1 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.301 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.302 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.559 nvme0n1 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.559 08:02:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.559 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.560 nvme0n1 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.560 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.817 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.818 08:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.818 08:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.818 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.818 08:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 nvme0n1 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.818 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.077 08:02:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.077 08:02:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.077 nvme0n1 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.077 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.078 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.336 nvme0n1 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.336 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.337 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.596 nvme0n1 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.596 
08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.596 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.597 08:02:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.597 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.855 nvme0n1 00:35:48.855 08:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.855 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.855 08:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.855 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.855 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.855 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.855 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
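The cycle that repeats throughout this trace starts on the target side: nvmet_auth_set_key (host/auth.sh@42-51) loads one of the pre-generated DH-HMAC-CHAP secrets into the kernel nvmet host entry. xtrace shows only the echo commands, not where their output is redirected, so the sketch below is a reconstruction; the configfs destination paths are an assumption based on the standard Linux nvmet host attributes, not something visible in this log.

    # Sketch (not verbatim from this repo) of what nvmet_auth_set_key appears to
    # do, judging by the traced echoes at host/auth.sh@48-51. The configfs paths
    # are assumed -- redirections do not appear in xtrace output.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        # hypothetical host directory under the nvmet configfs tree
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"       # host/auth.sh@48
        echo "$dhgroup" > "$host/dhchap_dhgroup"         # host/auth.sh@49
        echo "$key" > "$host/dhchap_key"                 # host/auth.sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # host/auth.sh@51
    }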
00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.856 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.115 nvme0n1 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.115 08:02:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.115 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
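The nvmf/common.sh@741-755 block that recurs before every attach is get_main_ns_ip: an associative array maps each transport to the name of the environment variable holding the address to dial, and for tcp that resolves to NVMF_INITIATOR_IP, i.e. 10.0.0.1 on this rig. A minimal sketch under those assumptions; the indirect expansion and the TEST_TRANSPORT variable name are inferred, since xtrace only prints the already-expanded values:

    # Reconstruction of get_main_ns_ip from the nvmf/common.sh@741-755 trace
    # lines above; the ip=${!ip} step is implied by the [[ -z 10.0.0.1 ]] test.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1             # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}             # -> NVMF_INITIATOR_IP
        ip=${!ip}                                        # -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }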
00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.116 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.375 nvme0n1 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.375 
08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.375 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.634 nvme0n1 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.634 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.893 08:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.153 nvme0n1 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.153 08:02:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.153 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.154 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.414 nvme0n1 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
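With the target-side key staged, each iteration runs connect_authenticate (host/auth.sh@55-65): pin the initiator to the single digest/dhgroup under test via bdev_nvme_set_options, attach with the keyring entries for this key ID, confirm a controller named nvme0 came up, then tear it down. A sketch assembled from the traced commands; the argument plumbing in the real helper may differ:

    # Sketch of connect_authenticate following the host/auth.sh@55-65 trace.
    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # only pass a controller key if one exists for this keyid (@58)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                           # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                  # @65
    }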
00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.414 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.415 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.415 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.415 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.415 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.415 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.672 nvme0n1 00:35:50.672 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.672 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:50.672 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.672 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.672 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.673 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.931 08:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.190 nvme0n1 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.190 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.450 nvme0n1 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
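Annotation: the trace above is one full pass of connect_authenticate. host/auth.sh@60 narrows the initiator to a single digest/dhgroup pair with bdev_nvme_set_options, host/auth.sh@61 resolves the initiator IP (10.0.0.1) and attaches with the keyid under test, and auth.sh@64/@65 verify that a controller named nvme0 appeared before detaching it. A stand-alone sketch of that host-side sequence for the sha512/ffdhe6144/keyid=0 case follows; the RPC names and flags are taken from the log, but the ./scripts/rpc.py invocation path is an assumption, and the registration of the key names key0/ckey0 happens before this excerpt and is not shown here.

    # Hedged sketch of the host-side connect_authenticate steps logged above.
    # Assumes an SPDK checkout (./scripts/rpc.py) and a target on 10.0.0.1:4420;
    # key0/ckey0 are key names the test registered earlier in the run
    # (registration step not visible in this excerpt).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Restricting bdev_nvme_set_options to exactly one digest and one dhgroup per pass is what lets the loop prove each combination negotiates on its own, rather than silently falling back to another allowed pair. The log continues below mid-way through get_main_ns_ip.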
00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.450 08:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.017 nvme0n1 00:35:52.017 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.017 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.017 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.017 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.017 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.017 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
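Annotation: on the target side, each nvmet_auth_set_key call above (host/auth.sh@42 through @51) selects the digest, dhgroup, key, and optional controller key for one keyid and writes them out with echo statements whose redirection targets xtrace does not show. The sketch below guesses at those redirections; the /sys/kernel/config/nvmet attribute names are an assumption based on the upstream Linux nvmet driver, not something this log confirms.

    # Hedged sketch of nvmet_auth_set_key's write side; configfs paths assumed.
    # $key / $ckey stand for the DHHC-1:... secrets echoed at auth.sh@50/@51.
    hostnqn=nqn.2024-02.io.spdk:host0
    host=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # auth.sh@48
    echo ffdhe6144      > "$host/dhchap_dhgroup"   # auth.sh@49
    echo "$key"         > "$host/dhchap_key"       # auth.sh@50
    # auth.sh@51: a controller key is written only when ckeys[keyid] is non-empty
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrlr_key"

This is also why the keyid=4 passes attach without --dhchap-ctrlr-key: their ckey is empty ([[ -z '' ]] at auth.sh@51), so only unidirectional authentication is configured for that key. The trace resumes below inside connect_authenticate for keyid=1.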
00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.276 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.845 nvme0n1 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.845 08:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.414 nvme0n1 00:35:53.414 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.414 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.414 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.415 08:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 nvme0n1 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.547 nvme0n1 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.547 08:02:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBjMWRlMDMzYTZjNmYwOTUzNDJhYjNhMTUwN2Y5YWP4tPKQ: 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczNTA3ZjAxZjliY2UzNzZlYjQ3OTk2N2ZkZGZjZjA4OTI4ZTFhODI2NzYyNjk5ZjU4OWJlOGEwYjU3MzQyYiNBkcI=: 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.547 08:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.517 nvme0n1 00:35:55.517 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.517 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.517 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.517 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.517 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.776 08:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.709 nvme0n1 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.709 08:02:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdjMjE3OTk3MTZmNTk0ZjNlOGYzNWJlYzZjN2QwNWJMaASB: 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: ]] 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWM2MGQxYmVjYjk5ZmM2ZDc1OTQwYjRmZGViOGEwY2YtgHPE: 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.709 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.710 08:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.088 nvme0n1 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmI4MTc2OTk2MjBkMDBiZmY1OGZlNDM5NjNmODBkZTdiYTExM2RlM2Y4NjZlNTBhk4JEBA==: 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzAwMjMxODJmZTdlMjU1YmE0ZjY1MGExNDQ5MmZiZWSxqoWM: 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:58.088 08:02:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.088 08:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.023 nvme0n1 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFmMjllY2E4ZGI5ZTljZGNhNDY1MjZjY2RjYTM3NWMxNWM4MzQ1MGI3ZTI1OGVkZTAzNjVlMjU4NTg2MTczNlsUYDw=: 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:59.023 08:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.959 nvme0n1 00:35:59.959 08:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.959 08:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.959 08:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.959 08:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.959 08:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTlhZjNhMWZiMTk5YzFjYmUwMTAzMzNkOTE0ZDRmZWZhM2NkYWQ1MDhlMzhjNTRkVNZtwA==: 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: ]] 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJhMGQ0OWNhY2Y2YTg5MTZiZTMxZTVjODQwMTQ0MjI3M2I1MTk4ZGM4ZGRiMTljznEylA==: 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.959 
08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.959 request:
00:35:59.959 {
00:35:59.959 "name": "nvme0",
00:35:59.959 "trtype": "tcp",
00:35:59.959 "traddr": "10.0.0.1",
00:35:59.959 "adrfam": "ipv4",
00:35:59.959 "trsvcid": "4420",
00:35:59.959 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:35:59.959 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:35:59.959 "prchk_reftag": false,
00:35:59.959 "prchk_guard": false,
00:35:59.959 "hdgst": false,
00:35:59.959 "ddgst": false,
00:35:59.959 "method": "bdev_nvme_attach_controller",
00:35:59.959 "req_id": 1
00:35:59.959 }
00:35:59.959 Got JSON-RPC error response
00:35:59.959 response:
00:35:59.959 {
00:35:59.959 "code": -5,
00:35:59.959 "message": "Input/output error"
00:35:59.959 }
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:59.959 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.217 request:
00:36:00.217 {
00:36:00.217 "name": "nvme0",
00:36:00.217 "trtype": "tcp",
00:36:00.217 "traddr": "10.0.0.1",
00:36:00.217 "adrfam": "ipv4",
00:36:00.217 "trsvcid": "4420",
00:36:00.217 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:36:00.217 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:36:00.217 "prchk_reftag": false,
00:36:00.217 "prchk_guard": false,
00:36:00.217 "hdgst": false,
00:36:00.217 "ddgst": false,
00:36:00.217 "dhchap_key": "key2",
00:36:00.217 "method": "bdev_nvme_attach_controller",
00:36:00.217 "req_id": 1
00:36:00.217 }
00:36:00.217 Got JSON-RPC error response
00:36:00.217 response:
00:36:00.217 {
00:36:00.217 "code": -5,
00:36:00.217 "message": "Input/output error"
00:36:00.217 }
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:00.217 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.217 request:
00:36:00.217 {
00:36:00.217 "name": "nvme0",
00:36:00.217 "trtype": "tcp",
00:36:00.217 "traddr": "10.0.0.1",
00:36:00.217 "adrfam": "ipv4",
00:36:00.217 "trsvcid": "4420",
00:36:00.217 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:36:00.217 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:36:00.217 "prchk_reftag": false,
00:36:00.217 "prchk_guard": false,
00:36:00.217 "hdgst": false,
00:36:00.217 "ddgst": false,
00:36:00.218 "dhchap_key": "key1",
00:36:00.218 "dhchap_ctrlr_key": "ckey2",
00:36:00.218 "method": "bdev_nvme_attach_controller",
00:36:00.218 "req_id": 1
00:36:00.218 }
00:36:00.218 Got JSON-RPC error response
00:36:00.218 response:
00:36:00.218 {
00:36:00.218 "code": -5,
00:36:00.218 "message": "Input/output error"
00:36:00.218 }
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:00.218 rmmod nvme_tcp
00:36:00.218 rmmod nvme_fabrics
00:36:00.218 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1230102 ']'
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1230102
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1230102 ']'
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1230102
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1230102
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1230102'
00:36:00.477 killing process with pid 1230102
00:36:00.477 08:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1230102
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1230102
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:01.413 08:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:36:03.945 08:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:36:04.879 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:36:04.879 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:36:04.879 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:36:05.816 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:36:05.816 08:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gO7 /tmp/spdk.key-null.yxR /tmp/spdk.key-sha256.cOV /tmp/spdk.key-sha384.hwX /tmp/spdk.key-sha512.eZc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:36:05.816 08:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:36:07.191 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:36:07.191 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:36:07.191 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:36:07.191 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:36:07.191 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:36:07.191 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:36:07.191 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:36:07.191 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:36:07.191 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:36:07.191 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:36:07.191 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:36:07.191 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:36:07.191 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:36:07.191 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:36:07.191 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:36:07.191 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:36:07.191 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:36:07.191
00:36:07.191 real 0m51.902s
00:36:07.191 user 0m49.685s
00:36:07.191 sys 0m6.069s
00:36:07.191 08:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:07.191 08:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:07.191 ************************************
00:36:07.191 END TEST nvmf_auth_host
00:36:07.191 ************************************
00:36:07.191 08:02:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:36:07.191 08:02:58 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]]
00:36:07.191 08:02:58 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:36:07.191 08:02:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:36:07.191 08:02:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:07.191 08:02:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:07.191 ************************************
00:36:07.191 START TEST nvmf_digest
00:36:07.191 ************************************
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
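Every suite in this run goes through the same autotest_common.sh run_test helper, which produces the starred START TEST/END TEST banners and the real/user/sys timing block seen just above. Roughly, the pattern is as follows (a condensed sketch, not the exact implementation, which also funnels output through xtrace_disable):

  run_test() {               # banner-and-timing wrapper used by autotest
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"              # run the suite; prints the real/user/sys block
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }
  run_test nvmf_digest test/nvmf/host/digest.sh --transport=tcp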
00:36:07.191 * Looking for test storage...
00:36:07.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:07.191 08:02:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=()
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:36:09.720 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:36:09.720 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:36:09.720 Found net devices under 0000:0a:00.0: cvl_0_0
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:36:09.720 Found net devices under 0000:0a:00.1: cvl_0_1
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:36:09.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:09.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms
00:36:09.720
00:36:09.720 --- 10.0.0.2 ping statistics ---
00:36:09.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:09.720 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:36:09.720 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:09.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:09.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:36:09.721
00:36:09.721 --- 10.0.0.1 ping statistics ---
00:36:09.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:09.721 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:09.721 ************************************
00:36:09.721 START TEST nvmf_digest_clean
00:36:09.721 ************************************
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1239942
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1239942
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1239942 ']'
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:09.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:09.721 08:03:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:09.721 [2024-07-15 08:03:00.603060] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:09.721 [2024-07-15 08:03:00.603207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:09.721 EAL: No free 2048 kB hugepages reported on node 1
00:36:09.721 [2024-07-15 08:03:00.743657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:09.980 [2024-07-15 08:03:01.008199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:09.980 [2024-07-15 08:03:01.008287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:09.980 [2024-07-15 08:03:01.008316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:09.980 [2024-07-15 08:03:01.008341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:09.980 [2024-07-15 08:03:01.008364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
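The nvmf_tgt just launched runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init created above: the first e810 port (cvl_0_0, 10.0.0.2) is the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. Condensed from the nvmf/common.sh steps already logged, the plumbing amounts to:

  ip netns add cvl_0_0_ns_spdk                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target check

This is also why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD ("ip netns exec cvl_0_0_ns_spdk") in the nvmf_tgt command line above.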
00:36:09.980 [2024-07-15 08:03:01.008412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:10.546 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:10.803 null0
00:36:10.803 [2024-07-15 08:03:01.866637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:10.803 [2024-07-15 08:03:01.890886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1240170
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1240170 /var/tmp/bperf.sock
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1240170 ']'
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:10.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:10.803 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:11.062 [2024-07-15 08:03:01.971360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:11.062 [2024-07-15 08:03:01.971516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240170 ]
00:36:11.062 EAL: No free 2048 kB hugepages reported on node 1
00:36:11.062 [2024-07-15 08:03:02.097631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:11.340 [2024-07-15 08:03:02.329730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:11.919 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:11.919 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:36:11.919 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:36:11.919 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:36:11.919 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:36:12.485 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:12.485 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:12.743 nvme0n1
00:36:12.743 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:36:12.743 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
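Because bdevperf was started with -z --wait-for-rpc, it sits idle until driven over /var/tmp/bperf.sock; each digest pass then repeats the three-step control sequence just logged. Condensed (flags copied from the log; repository paths shortened here for readability):

  # 1) let the paused bdevperf finish its framework initialization
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 2) create the NVMe/TCP bdev with data digest (--ddgst) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 3) kick off the timed workload against the resulting nvme0n1 bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests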
00:36:13.000 Running I/O for 2 seconds...
00:36:14.903
00:36:14.903 Latency(us)
00:36:14.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:14.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:14.903 nvme0n1 : 2.00 13707.26 53.54 0.00 0.00 9323.70 4975.88 22233.69
00:36:14.903 ===================================================================================================================
00:36:14.903 Total : 13707.26 53.54 0.00 0.00 9323.70 4975.88 22233.69
00:36:14.903 0
00:36:14.903 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:36:14.903 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:36:14.903 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:36:14.903 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:36:14.903 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:36:14.903 | select(.opcode=="crc32c")
00:36:14.903 | "\(.module_name) \(.executed)"'
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1240170
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1240170 ']'
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1240170
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1240170
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1240170'
00:36:15.161 killing process with pid 1240170
00:36:15.161 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1240170
00:36:15.161 Received shutdown signal, test time was about 2.000000 seconds
00:36:15.161
00:36:15.161 Latency(us)
00:36:15.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:15.162 ===================================================================================================================
00:36:15.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:15.162 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1240170
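After each pass the script verifies that the CRC-32C digests were actually computed in the expected accel module: with scan_dsa=false the expected module is "software" and the executed count must be greater than zero. The check, with an illustrative (not captured) output line:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # e.g. prints:  software 27414
  # host/digest.sh reads this as "acc_module acc_executed" and asserts
  # acc_executed > 0 and acc_module == exp_module (software)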
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1240891
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1240891 /var/tmp/bperf.sock
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1240891 ']'
00:36:16.535 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:16.536 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:16.536 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:16.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:16.536 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:16.536 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:16.536 [2024-07-15 08:03:07.438855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:16.536 [2024-07-15 08:03:07.439048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240891 ]
00:36:16.536 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:16.536 Zero copy mechanism will not be used.
00:36:16.536 EAL: No free 2048 kB hugepages reported on node 1
00:36:16.794 [2024-07-15 08:03:07.574543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:16.794 [2024-07-15 08:03:07.828375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:17.380 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:17.380 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:36:17.380 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:36:17.380 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:36:17.380 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:36:17.945 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:17.945 08:03:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:18.202 nvme0n1
00:36:18.202 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:36:18.202 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:18.460 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:18.460 Zero copy mechanism will not be used.
00:36:18.460 Running I/O for 2 seconds...
00:36:20.359
00:36:20.359 Latency(us)
00:36:20.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:20.359 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:20.359 nvme0n1 : 2.00 3278.07 409.76 0.00 0.00 4873.89 4587.52 7670.14
00:36:20.359 ===================================================================================================================
00:36:20.359 Total : 3278.07 409.76 0.00 0.00 4873.89 4587.52 7670.14
00:36:20.359 0
00:36:20.359 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:36:20.359 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:36:20.359 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:36:20.359 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:36:20.359 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:36:20.359 | select(.opcode=="crc32c")
00:36:20.359 | "\(.module_name) \(.executed)"'
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1240891
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1240891 ']'
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1240891
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1240891
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1240891'
00:36:20.619 killing process with pid 1240891
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1240891
00:36:20.619 Received shutdown signal, test time was about 2.000000 seconds
00:36:20.619
00:36:20.619 Latency(us)
00:36:20.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:20.619 ===================================================================================================================
00:36:20.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:20.619 08:03:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1240891
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1241907
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1241907 /var/tmp/bperf.sock
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1241907 ']'
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:21.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:21.996 08:03:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:21.996 [2024-07-15 08:03:12.931801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:21.996 [2024-07-15 08:03:12.931986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241907 ]
00:36:21.996 EAL: No free 2048 kB hugepages reported on node 1
00:36:22.255 [2024-07-15 08:03:13.072738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:22.255 [2024-07-15 08:03:13.303139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:22.819 08:03:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:22.819 08:03:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:36:22.819 08:03:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:36:22.819 08:03:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:36:22.819 08:03:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:36:23.386 08:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:23.386 08:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:23.643 nvme0n1
00:36:23.643 08:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:36:23.643 08:03:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:23.902 Running I/O for 2 seconds...
00:36:25.801
00:36:25.801 Latency(us)
00:36:25.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:25.801 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:25.801 nvme0n1 : 2.01 14391.16 56.22 0.00 0.00 8868.60 5121.52 14369.37
00:36:25.801 ===================================================================================================================
00:36:25.801 Total : 14391.16 56.22 0.00 0.00 8868.60 5121.52 14369.37
00:36:25.801 0
00:36:25.801 08:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:36:25.801 08:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:36:25.801 08:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:36:25.801 08:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:36:25.801 08:03:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:36:25.801 | select(.opcode=="crc32c")
00:36:25.801 | "\(.module_name) \(.executed)"'
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1241907
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1241907 ']'
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1241907
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1241907
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1241907'
killing process with pid 1241907
08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1241907
Received shutdown signal, test time was about 2.000000 seconds
00:36:26.059
00:36:26.059 Latency(us)
00:36:26.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:26.059 ===================================================================================================================
00:36:26.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:26.059 08:03:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1241907
00:36:27.011 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
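The pass/fail gate in the block above comes from the accel framework's own counters: with scan_dsa=false the crc32c digests must have been computed by the software module, and executed at least once. A sketch of the same check as a standalone pipeline (rpc command and jq filter verbatim from the trace; the process substitution is one way to feed read, not necessarily how the script does it):

    # Ask the bperf app which accel module ran crc32c and how many times.
    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # Both conditions are asserted by the test before teardown.
    [[ $acc_module == software ]] && (( acc_executed > 0 ))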
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1242552
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1242552 /var/tmp/bperf.sock
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1242552 ']'
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:27.293 08:03:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:27.293 [2024-07-15 08:03:18.311802] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:27.293 [2024-07-15 08:03:18.311963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242552 ]
00:36:27.293 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:27.293 Zero copy mechanism will not be used.
00:36:27.293 EAL: No free 2048 kB hugepages reported on node 1
00:36:27.293 [2024-07-15 08:03:18.439680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:27.552 [2024-07-15 08:03:18.666533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:28.117 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:28.118 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:36:28.118 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:36:28.118 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:36:28.118 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:36:28.686 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:28.686 08:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:29.251 nvme0n1
00:36:29.251 08:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:36:29.251 08:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:29.251 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:29.251 Zero copy mechanism will not be used.
00:36:29.251 Running I/O for 2 seconds...
00:36:31.783
00:36:31.783 Latency(us)
00:36:31.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:31.783 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:31.783 nvme0n1 : 2.00 3779.15 472.39 0.00 0.00 4222.04 3349.62 10631.40
00:36:31.783 ===================================================================================================================
00:36:31.783 Total : 3779.15 472.39 0.00 0.00 4222.04 3349.62 10631.40
00:36:31.783 0
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:36:31.783 | select(.opcode=="crc32c")
00:36:31.783 | "\(.module_name) \(.executed)"'
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1242552
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1242552 ']'
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1242552
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1242552
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1242552'
killing process with pid 1242552
08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1242552
Received shutdown signal, test time was about 2.000000 seconds
00:36:31.783
00:36:31.783 Latency(us)
00:36:31.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:31.783 ===================================================================================================================
00:36:31.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:31.783 08:03:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1242552
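The killprocess calls traced above and below are the autotest teardown helper: it refuses to act on an empty pid or a bare sudo wrapper, announces the kill, then reaps the process so the shutdown output (the empty Latency table above) lands in the log. A paraphrased reconstruction, assuming the same guards the trace shows (the real helper lives in common/autotest_common.sh and may differ in detail):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                          # @948: no pid, nothing to do
        kill -0 "$pid" || return 1                         # @952: is it still alive?
        if [ "$(uname)" = Linux ]; then                    # @953
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @954: e.g. reactor_1
            [ "$process_name" = sudo ] && return 1         # @958: never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"               # @966
        kill "$pid"                                        # @967
        wait "$pid"                                        # @972: reap; shutdown stats print here
    }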
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1239942
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1239942 ']'
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1239942
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239942
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:32.721 08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239942'
killing process with pid 1239942
08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1239942
08:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1239942
00:36:34.100
00:36:34.100 real 0m24.513s
00:36:34.100 user 0m47.770s
00:36:34.100 sys 0m4.662s
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:36:34.100 ************************************
00:36:34.100 END TEST nvmf_digest_clean
00:36:34.100 ************************************
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:34.100 ************************************
00:36:34.100 START TEST nvmf_digest_error
00:36:34.100 ************************************
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1243377
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1243377
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1243377 ']'
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:34.100 08:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:34.100 [2024-07-15 08:03:25.160642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:34.100 [2024-07-15 08:03:25.160790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:34.100 EAL: No free 2048 kB hugepages reported on node 1
00:36:34.100 [2024-07-15 08:03:25.295255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:34.359 [2024-07-15 08:03:25.519796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:34.359 [2024-07-15 08:03:25.519872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:34.359 [2024-07-15 08:03:25.519913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:34.359 [2024-07-15 08:03:25.519940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:34.359 [2024-07-15 08:03:25.519962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
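On the target side the pattern differs only in the app: nvmf_tgt is launched inside the test's network namespace with every tracepoint group enabled, and likewise parked at --wait-for-rpc so crc32c can be rerouted before the framework comes up. A sketch (command verbatim from the trace; the backgrounding is implied by the later waitforlisten):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Per the notices above, the enabled tracepoints can later be snapshotted with
    # 'spdk_trace -s nvmf -i 0', or /dev/shm/nvmf_trace.0 copied for offline analysis.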
00:36:34.359 [2024-07-15 08:03:25.520012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:34.925 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:34.925 [2024-07-15 08:03:26.150488] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:36:35.183 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.183 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:36:35.183 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:36:35.183 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.183 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:35.442 null0
00:36:35.442 [2024-07-15 08:03:26.531941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:35.442 [2024-07-15 08:03:26.556255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1243598
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1243598 /var/tmp/bperf.sock
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1243598 ']'
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:35.442 08:03:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:35.442 [2024-07-15 08:03:26.642854] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:35.442 [2024-07-15 08:03:26.643024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243598 ]
00:36:35.700 EAL: No free 2048 kB hugepages reported on node 1
00:36:35.700 [2024-07-15 08:03:26.776144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:35.958 [2024-07-15 08:03:27.025411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:36.523 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:36.523 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:36.523 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:36.523 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:36.781 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:36.781 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:36.781 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:36.781 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:36.781 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:36.781 08:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:37.038 nvme0n1
00:36:37.038 08:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:37.038 08:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:37.038 08:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:37.038 08:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:37.038 08:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:37.038 08:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
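Everything needed to read the error storm that follows is in the RPC pairs above: crc32c on the target was bound to the error-injection accel module at startup, injection stays disabled while the host attaches (the connect itself exercises digests), and only then are the next 256 crc32c operations corrupted, so the host's receive path flags read completions with data digest errors while the bdev layer retries them. The same sequence as explicit calls (names and flags verbatim from the trace; rpc_cmd in digest.sh targets the target's default /var/tmp/spdk.sock, bperf_rpc the bdevperf socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host side: keep per-error NVMe statistics and retry failed I/O forever (-1).
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side: injection off while the controller attaches...
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then corrupt the next 256 digest computations and run the workload.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests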
Running I/O for 2 seconds...
00:36:37.296 [2024-07-15 08:03:28.373664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.373730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.373787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.296 [2024-07-15 08:03:28.391337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.391386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.391415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.296 [2024-07-15 08:03:28.411051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.411092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.411116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.296 [2024-07-15 08:03:28.429109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.429151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.429192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.296 [2024-07-15 08:03:28.445790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.445842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.445873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.296 [2024-07-15 08:03:28.465001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.465042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.465067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.296 [2024-07-15 08:03:28.479247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.296 [2024-07-15 08:03:28.479303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.296 [2024-07-15 08:03:28.479334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.297 [2024-07-15 08:03:28.498432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.297 [2024-07-15 08:03:28.498480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.297 [2024-07-15 08:03:28.498510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.297 [2024-07-15 08:03:28.519041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.297 [2024-07-15 08:03:28.519080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.297 [2024-07-15 08:03:28.519105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.534372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.534429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.534460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.554058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.554098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.554123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.569764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.569827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.569857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.588544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.588588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.588614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.607373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.607414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.607438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.622655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.622702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.622731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.642538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.642586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.642615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.665031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.665078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.665105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.554 [2024-07-15 08:03:28.684797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.554 [2024-07-15 08:03:28.684838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.554 [2024-07-15 08:03:28.684894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.555 [2024-07-15 08:03:28.700568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.555 [2024-07-15 08:03:28.700615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.555 [2024-07-15 08:03:28.700664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.555 [2024-07-15 08:03:28.718089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.555 [2024-07-15 08:03:28.718129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.555 [2024-07-15 08:03:28.718152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.555 [2024-07-15 08:03:28.734554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.555 [2024-07-15 08:03:28.734602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.555 [2024-07-15 08:03:28.734631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.555 [2024-07-15 08:03:28.750750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.555 [2024-07-15 08:03:28.750816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.555 [2024-07-15 08:03:28.750843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.555 [2024-07-15 08:03:28.769050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.555 [2024-07-15 08:03:28.769102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.555 [2024-07-15 08:03:28.769129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.787018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.787062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.787088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.805390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.805439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.805469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.826643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.826700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.826730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.842783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.842847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.842891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.863097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.863151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.863181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.882221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.882268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.882296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.898253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.898297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.898323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.918133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.918202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.918230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.933545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.933593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.933623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.950270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.950324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.950349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.970208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.970261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.970291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:28.985370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:28.985418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:28.985455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:29.004122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:29.004178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:29.004204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:29.023608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:29.023666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:29.023693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:37.813 [2024-07-15 08:03:29.038438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:37.813 [2024-07-15 08:03:29.038485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:37.813 [2024-07-15 08:03:29.038514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.070 [2024-07-15 08:03:29.059679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.070 [2024-07-15 08:03:29.059735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.059760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.075036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.075095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.075122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.092650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.092707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.092737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.107053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.107108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.107133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.128177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.128232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.128259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.146859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.146943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.146969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.162840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.162900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.162945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.180051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.180107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.180133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.199838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.199900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.199928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.215525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.215580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.215606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.234242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.234299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.234327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.248892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.248951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.248976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.265789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.265836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.265865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.071 [2024-07-15 08:03:29.284950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.071 [2024-07-15 08:03:29.285004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.071 [2024-07-15 08:03:29.285037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.328 [2024-07-15 08:03:29.305162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.328 [2024-07-15 08:03:29.305211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.328 [2024-07-15 08:03:29.305241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.328 [2024-07-15 08:03:29.320404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.328 [2024-07-15 08:03:29.320452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.328 [2024-07-15 08:03:29.320481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.328 [2024-07-15 08:03:29.341975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.328 [2024-07-15 08:03:29.342031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.328 [2024-07-15 08:03:29.342057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.328 [2024-07-15 08:03:29.355984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.328 [2024-07-15 08:03:29.356036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.328 [2024-07-15 08:03:29.356061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.328 [2024-07-15 08:03:29.376190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.328 [2024-07-15 08:03:29.376246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.328 [2024-07-15 08:03:29.376272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.328 [2024-07-15 08:03:29.396469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.328 [2024-07-15 08:03:29.396524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.328 [2024-07-15 08:03:29.396551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.412689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.412737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.412766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.430974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.431029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.431054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.447844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.447932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.447963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.464452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.464499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.464528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.481962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.482016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.482043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.502023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.502078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.502117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.517700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.517747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.517777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.329 [2024-07-15 08:03:29.538223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.329 [2024-07-15 08:03:29.538278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.329 [2024-07-15 08:03:29.538303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.561415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.561472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.561500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.580239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.580297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.580328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.596544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.596592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.596622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.616836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.616904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.616937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.636422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.636480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.636511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.651059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.651154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.671366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.671423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.671453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.687026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.687083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.587 [2024-07-15 08:03:29.687110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.587 [2024-07-15 08:03:29.704628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.587 [2024-07-15 08:03:29.704677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.704707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.588 [2024-07-15 08:03:29.722799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.588 [2024-07-15 08:03:29.722855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.722890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.588 [2024-07-15 08:03:29.742531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.588 [2024-07-15 08:03:29.742591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.742625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.588 [2024-07-15 08:03:29.757665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.588 [2024-07-15 08:03:29.757728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.757756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.588 [2024-07-15 08:03:29.777993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.588 [2024-07-15 08:03:29.778037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.778064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.588 [2024-07-15 08:03:29.793293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.588 [2024-07-15 08:03:29.793357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.793387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.588 [2024-07-15 08:03:29.812613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.588 [2024-07-15 08:03:29.812663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.588 [2024-07-15 08:03:29.812692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.847 [2024-07-15 08:03:29.830500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.847 [2024-07-15 08:03:29.830549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10939
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.830577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.847684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.847737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.847762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.866345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.866400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.866432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.883410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.883458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.883487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.902639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.902687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.902716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.922528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.922577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.922606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.945532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.945589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.945615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.965649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.965706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.965732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.980991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.981046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.981071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:29.999663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:29.999722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:29.999752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:30.019931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:30.019996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:30.020028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:30.037195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:30.037292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:30.037324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.848 [2024-07-15 08:03:30.060744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.848 [2024-07-15 08:03:30.060817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.848 [2024-07-15 08:03:30.060853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.082012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.082090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.082119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.097889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.097952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.097999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.120699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.120749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.120796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.141517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.141576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.141602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.159446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.159504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.159532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.174092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.174152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.174181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.194959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.195014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.195040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.209075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.209148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.209174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 
08:03:30.230907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.230967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.230992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.248627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.248683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.248708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.264531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.264579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.264608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.280136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.280191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.280217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.300034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.300088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.300113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.317709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.317761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.317792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.108 [2024-07-15 08:03:30.331661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.108 [2024-07-15 08:03:30.331709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.108 [2024-07-15 08:03:30.331739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.366 [2024-07-15 08:03:30.354069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.366 [2024-07-15 08:03:30.354129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.366 [2024-07-15 08:03:30.354170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.366 00:36:39.366 Latency(us) 00:36:39.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.366 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:39.366 nvme0n1 : 2.01 14037.11 54.83 0.00 0.00 9103.33 4660.34 31068.92 00:36:39.366 =================================================================================================================== 00:36:39.366 Total : 14037.11 54.83 0.00 0.00 9103.33 4660.34 31068.92 00:36:39.366 0 00:36:39.366 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:39.366 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:39.366 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:39.366 | .driver_specific 00:36:39.366 | .nvme_error 00:36:39.366 | .status_code 00:36:39.366 | .command_transient_transport_error' 00:36:39.366 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1243598 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1243598 ']' 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1243598 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243598 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243598' 00:36:39.624 killing process with pid 1243598 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1243598 00:36:39.624 Received shutdown signal, test time was about 2.000000 seconds 00:36:39.624 00:36:39.624 Latency(us) 00:36:39.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.624 =================================================================================================================== 00:36:39.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.624 08:03:30 nvmf_tcp.nvmf_digest.nvmf_digest_error 
00:36:40.556 08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1244189
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1244189 /var/tmp/bperf.sock
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1244189 ']'
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
08:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:40.815 [2024-07-15 08:03:31.790775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:40.815 [2024-07-15 08:03:31.790943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244189 ]
00:36:40.815 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:40.815 Zero copy mechanism will not be used.
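For the 128 KiB pass, bdevperf is started with -z, which brings the app up idle until a perform_tests RPC arrives, and waitforlisten blocks until the new process is serving its RPC socket. A rough sketch of that launch-and-trigger pattern (binary, socket, and script paths are the ones traced here; the polling loop is a simplification of autotest_common.sh's waitforlisten, not its exact code):

    # Start bdevperf pinned to core 1 (-m 2), on a private RPC socket, with a
    # 2-second 128 KiB randread workload at queue depth 16, idle until told (-z).
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Simplified wait: give the process up to ~10s to create its RPC socket.
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/bperf.sock ]] && break
        sleep 0.1
    done
    # Once the bdev is attached (see the RPCs below), start the measured run:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests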
00:36:40.815 EAL: No free 2048 kB hugepages reported on node 1
00:36:40.815 [2024-07-15 08:03:31.922836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:41.073 [2024-07-15 08:03:32.174611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:41.670 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:41.670 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:41.670 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.670 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.927 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:41.927 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:41.927 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.927 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:41.927 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:41.927 08:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:42.185 nvme0n1
00:36:42.185 08:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:42.185 08:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:42.185 08:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:42.185 08:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:42.185 08:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:42.185 08:03:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:42.185 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:42.185 Zero copy mechanism will not be used.
00:36:42.185 Running I/O for 2 seconds...
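That setup trace is what makes every read below fail its digest: error statistics are enabled in the NVMe bdev layer, crc32c injection is disabled while the controller attaches so the connect itself completes cleanly, the controller is attached with --ddgst so the host verifies a CRC32C data digest on every received data PDU, and only then is the accel crc32c operation switched to corrupt its results. Condensed to just the RPC sequence (commands verbatim from the trace; bperf_rpc and rpc_cmd are the test's thin wrappers around rpc.py, and reading -i 32 as an injection count/interval is an assumption, not something the log confirms):

    # Track per-bdev NVMe error status codes; -1 retries failed I/O indefinitely.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c injection off while attaching, so setup traffic is clean.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest (--ddgst) enabled on the host side.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt the computed crc32c values (-i 32 copied from the trace;
    # its exact semantics are an assumption here) and start the workload.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    bperf_py perform_tests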
00:36:42.444 [2024-07-15 08:03:33.414659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:42.444 [2024-07-15 08:03:33.414745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:42.444 [2024-07-15 08:03:33.414779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... about 60 similar three-line records (08:03:33.425187 through 08:03:34.099393) elided: with crc32c corruption injected, every 32-block (128 KiB) READ on tqpair 0x6150001f2a00 fails its data digest and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamps, cid, lba, and sqhd values differ ...]
00:36:42.964 [2024-07-15 08:03:34.109227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:42.964 [2024-07-15 08:03:34.109289] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.109319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.119152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.119219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.129119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.129178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.129216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.139154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.139198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.139223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.149131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.149173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.149198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.159427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.159478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.159507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.169426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.169475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.169504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.180066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.180112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.180148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.964 [2024-07-15 08:03:34.190201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.964 [2024-07-15 08:03:34.190259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.964 [2024-07-15 08:03:34.190304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.200287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.200337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.200367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.210026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.210068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.210093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.220027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.220071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.220098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.229770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.229819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.229847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.239856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.239928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.239955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.249960] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.250003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.250028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.260043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.260088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.260114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.269850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.269906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.269948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.279837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.279897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.279943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.290046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.290091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.290118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.299973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.300016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.300052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.310176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.310236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.310266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.320303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.320353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.320382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.330225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.330287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.330317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.340595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.340643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.340672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.350518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.350567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.350596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.360688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.360737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.360766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.370647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.370697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.370726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.380769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.380818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.380847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.390663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.390712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.390773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.400362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.400410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.400439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.410365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.410414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.410443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.420184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.420246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.420275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.430405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.430457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.430486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.222 [2024-07-15 08:03:34.440700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.222 [2024-07-15 08:03:34.440751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.222 [2024-07-15 08:03:34.440780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.481 [2024-07-15 08:03:34.450964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.481 [2024-07-15 08:03:34.451009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:43.481 [2024-07-15 08:03:34.451036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.481 [2024-07-15 08:03:34.461256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.481 [2024-07-15 08:03:34.461306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.481 [2024-07-15 08:03:34.461336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.481 [2024-07-15 08:03:34.472637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.472688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.472728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.484185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.484249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.484279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.496400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.496450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.496480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.508776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.508831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.508861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.520057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.520103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.520129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.532322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.532374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.532405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.543793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.543845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.543898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.555528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.555578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.555607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.567139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.567185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.567230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.579021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.579077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.579104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.591001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.591059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.591085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.597867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.597951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.597978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.610565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.610617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.610647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.622006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.622062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.622089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.633162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.633220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.633251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.645313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.645365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.645395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.656872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.656944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.667895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.667958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.667996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.680517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.680570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.680599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.693171] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.693237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.693268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.482 [2024-07-15 08:03:34.704245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.482 [2024-07-15 08:03:34.704294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.482 [2024-07-15 08:03:34.704324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.742 [2024-07-15 08:03:34.714802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.742 [2024-07-15 08:03:34.714854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.742 [2024-07-15 08:03:34.714894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.742 [2024-07-15 08:03:34.725392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.742 [2024-07-15 08:03:34.725443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.742 [2024-07-15 08:03:34.725473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.742 [2024-07-15 08:03:34.735174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.735231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.735262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.745244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.745308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.745337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.755024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.755067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.755092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.765054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.765109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.765134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.775404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.775455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.775484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.785269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.785346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.785376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.795413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.795464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.795493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.805337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.805384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.805413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.815423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.815473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.815512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.825152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.825206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.825237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.835173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.835226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.835252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.845103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.845146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.845178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.855130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.855195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.855234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.865274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.865324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.865354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.875404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.875455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.875485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.885547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.885595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.885624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.895638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.895686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.895715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.905483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.905529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.905559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.915297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.915343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.915371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.925028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.925068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.925092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.934974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.935024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.935049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.945283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.945332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.945361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.955228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.955288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.955317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.743 [2024-07-15 08:03:34.965257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.743 [2024-07-15 08:03:34.965303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.743 [2024-07-15 08:03:34.965331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.003 [2024-07-15 08:03:34.975244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.003 [2024-07-15 08:03:34.975292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.003 [2024-07-15 08:03:34.975321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.003 [2024-07-15 08:03:34.985240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.003 [2024-07-15 08:03:34.985287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.003 [2024-07-15 08:03:34.985316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.003 [2024-07-15 08:03:34.995209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.003 [2024-07-15 08:03:34.995262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.003 [2024-07-15 08:03:34.995302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.003 [2024-07-15 08:03:35.004992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.003 [2024-07-15 08:03:35.005032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.003 [2024-07-15 08:03:35.005055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.003 [2024-07-15 08:03:35.015423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.003 [2024-07-15 08:03:35.015471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.003 [2024-07-15 08:03:35.015499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.025354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.025402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.025430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.035084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.035124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.035148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.044922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.044962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.044986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.054807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.054855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.054892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.064418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.064465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.064494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.074256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.074303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.074332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.084186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.084228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.084254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.093959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.093998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.094022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.103890] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.103945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.103977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.113795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.113841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.113869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.123758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.123804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.123832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.133633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.133679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.133707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.143456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.143502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.143530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.153479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.153526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.153555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.163579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.163626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.163655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.174002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.174045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.174071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.184179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.184228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.184257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.194167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.194228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.194257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.205129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.205186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.205211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.215198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.215255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.215284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.004 [2024-07-15 08:03:35.225110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.004 [2024-07-15 08:03:35.225149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.004 [2024-07-15 08:03:35.225173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.235091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.235133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.235158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.245088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.245127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.245151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.255234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.255282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.255312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.265126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.265166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.265191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.275650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.275700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.275739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.286032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.286073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.286098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.296389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.296437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.296466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.306187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.306243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.306282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.317963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.318004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.318028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.329969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.330011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.330037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.342039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.342081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.342106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.354146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.354203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.354244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.366419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.366468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.366497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.378414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.378463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.262 [2024-07-15 08:03:35.378491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.262 [2024-07-15 08:03:35.390687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.262 [2024-07-15 08:03:35.390736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.262 [2024-07-15 08:03:35.390765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:44.262 [2024-07-15 08:03:35.402620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:44.262 [2024-07-15 08:03:35.402669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.262 [2024-07-15 08:03:35.402698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:44.262
00:36:44.262 Latency(us)
00:36:44.262 Device Information                                                           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:44.262 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:44.262 nvme0n1                                                                      :       2.00    2998.21     374.78       0.00       0.00    5329.55    1377.47   13107.20
00:36:44.262 ===================================================================================================================
00:36:44.262 Total                                                                        :               2998.21     374.78       0.00       0.00    5329.55    1377.47   13107.20
00:36:44.262 0
00:36:44.262 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:44.262 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:44.262 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:44.262 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:44.262 | .driver_specific
00:36:44.262 | .nvme_error
00:36:44.262 | .status_code
00:36:44.262 | .command_transient_transport_error'
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 ))
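For reference, the helper traced at host/digest.sh@71 above reads back the per-bdev NVMe error counters that bdevperf accumulates while the target corrupts data digests; the 193 in the (( 193 > 0 )) check is the value it returned for nvme0n1, i.e. 193 READs completed with TRANSIENT TRANSPORT ERROR, confirming the digest corruption was detected end to end. A minimal sketch of what the traced commands amount to, assuming only that the function body matches the xtrace output (the RPC name, socket path, and jq filter are copied verbatim from the trace):

    # Count transient transport errors recorded for a bdev by bdevperf.
    # bperf_rpc wraps scripts/rpc.py against /var/tmp/bperf.sock (host/digest.sh@18).
    get_transient_errcount() {
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
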
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1244189
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1244189 ']'
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1244189
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:44.520 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1244189
00:36:44.779 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:44.779 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:44.779 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1244189'
00:36:44.779 killing process with pid 1244189
00:36:44.779 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1244189
00:36:44.779 Received shutdown signal, test time was about 2.000000 seconds
00:36:44.779
00:36:44.779 Latency(us)
00:36:44.779 Device Information                                                           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:44.779 ===================================================================================================================
00:36:44.779 Total                                                                        :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:36:44.779 08:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1244189
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1244806
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1244806 /var/tmp/bperf.sock
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1244806 ']'
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:45.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:45.715 08:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
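The launch just traced is the usual bperf pattern in these tests: bdevperf is started in the background with -z, which keeps it idle until a perform_tests RPC arrives, and the harness blocks until its UNIX RPC socket answers. A minimal sketch under that assumption (the binary path, flags, and waitforlisten helper are as traced; $rootdir is an assumed variable standing in for the checkout root):

    # Start bdevperf idle (-z) on a 2-core mask with its own RPC socket and the
    # randwrite / 4096-byte / qd 128 / 2-second workload parameters traced above.
    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Block until the process listens on /var/tmp/bperf.sock (up to max_retries).
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
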
00:36:45.715 [2024-07-15 08:03:36.904892] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:45.715 [2024-07-15 08:03:36.905062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244806 ]
00:36:45.972 EAL: No free 2048 kB hugepages reported on node 1
00:36:45.972 [2024-07-15 08:03:37.039196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:46.231 [2024-07-15 08:03:37.283864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:46.796 08:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:46.796 08:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:46.796 08:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:46.796 08:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:47.054 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:47.054 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.054 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:47.054 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.054 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:47.054 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:47.620 nvme0n1
00:36:47.620 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:47.620 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.620 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:47.620 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.620 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:47.620 08:03:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:47.620 Running I/O for 2 seconds...
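Taken together, the trace above is the entire error-injection setup for the randwrite pass. A condensed, hedged replay of those steps, using the helper names exactly as the xtrace prints them (rpc_cmd drives the nvmf target's RPC socket while bperf_rpc drives bdevperf over /var/tmp/bperf.sock; reading -i 256 as a corruption interval is an assumption, not confirmed by the log):

    # Track NVMe error completions and retry forever inside the bdev layer.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach cleanly first: no crc32c corruption while connecting...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # ...then connect with data digest enabled (--ddgst) over TCP; prints nvme0n1.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt crc32c results on the target so data digests stop matching.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the queued bdevperf job; each digest mismatch surfaces below as a
    # data_crc32_calc_done error plus a TRANSIENT TRANSPORT ERROR completion.
    bperf_py perform_tests
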
00:36:47.620 [2024-07-15 08:03:38.752272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:47.620 [2024-07-15 08:03:38.753497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.753563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:47.620 [2024-07-15 08:03:38.768542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:36:47.620 [2024-07-15 08:03:38.769834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.769898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:47.620 [2024-07-15 08:03:38.786641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:36:47.620 [2024-07-15 08:03:38.788695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.788739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:47.620 [2024-07-15 08:03:38.801531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:36:47.620 [2024-07-15 08:03:38.803055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.803094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.620 [2024-07-15 08:03:38.817834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:47.620 [2024-07-15 08:03:38.819262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.819306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:47.620 [2024-07-15 08:03:38.836345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:36:47.620 [2024-07-15 08:03:38.838791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.838835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:47.620 [2024-07-15 08:03:38.847677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:47.620 [2024-07-15 08:03:38.848718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.620 [2024-07-15 08:03:38.848761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.863066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:36:47.878 [2024-07-15 08:03:38.864039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.864080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.881086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:47.878 [2024-07-15 08:03:38.882337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.882380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.897692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:36:47.878 [2024-07-15 08:03:38.899305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.899349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.912807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:36:47.878 [2024-07-15 08:03:38.914288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.914331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.930628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:36:47.878 [2024-07-15 08:03:38.932290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.932333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.947105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:47.878 [2024-07-15 08:03:38.948912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.948968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.960083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60 00:36:47.878 [2024-07-15 08:03:38.961094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.961132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.977870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f57b0 00:36:47.878 [2024-07-15 08:03:38.979641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.979682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:38.992811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:36:47.878 [2024-07-15 08:03:38.994134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:38.994202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:39.009321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.878 [2024-07-15 08:03:39.010446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:39.010489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:39.028072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:36:47.878 [2024-07-15 08:03:39.030380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:39.030424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:39.043077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:36:47.878 [2024-07-15 08:03:39.044736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:39.044779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:39.057657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:36:47.878 [2024-07-15 08:03:39.060148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.878 [2024-07-15 08:03:39.060215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:47.878 [2024-07-15 08:03:39.072671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:36:47.878 [2024-07-15 08:03:39.073732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:47.878 [2024-07-15 08:03:39.073775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:47.879 [2024-07-15 08:03:39.089433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:36:47.879 [2024-07-15 08:03:39.090725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.879 [2024-07-15 08:03:39.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:47.879 [2024-07-15 08:03:39.104638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:36:47.879 [2024-07-15 08:03:39.105898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.879 [2024-07-15 08:03:39.105940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.122528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:36:48.136 [2024-07-15 08:03:39.124060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.124099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.137344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:36:48.136 [2024-07-15 08:03:39.138804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.138847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.154885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:36:48.136 [2024-07-15 08:03:39.156557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.156600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.171046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:36:48.136 [2024-07-15 08:03:39.172911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.172967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.185856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:36:48.136 [2024-07-15 08:03:39.187700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:11875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.187743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.200700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:36:48.136 [2024-07-15 08:03:39.201964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.202002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.216729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:36:48.136 [2024-07-15 08:03:39.217955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.217993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.234772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:36:48.136 [2024-07-15 08:03:39.237083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.237122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.249401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:36:48.136 [2024-07-15 08:03:39.251130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.251169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.263627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:36:48.136 [2024-07-15 08:03:39.266311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.266363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.278299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:36:48.136 [2024-07-15 08:03:39.279349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.279388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.294600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:36:48.136 [2024-07-15 08:03:39.295853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.295904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.310559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:36:48.136 [2024-07-15 08:03:39.311794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.311838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.326358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:36:48.136 [2024-07-15 08:03:39.327599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.327641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.342031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:36:48.136 [2024-07-15 08:03:39.343270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.343313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.136 [2024-07-15 08:03:39.357839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:36:48.136 [2024-07-15 08:03:39.359118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.136 [2024-07-15 08:03:39.359156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.373772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:36:48.396 [2024-07-15 08:03:39.375027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.375065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.389559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:48.396 [2024-07-15 08:03:39.390824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.390867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.405323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195e99d8 00:36:48.396 [2024-07-15 08:03:39.406579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.406621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.421273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:36:48.396 [2024-07-15 08:03:39.422451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.422493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.439435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:36:48.396 [2024-07-15 08:03:39.441672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.441715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.453978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:36:48.396 [2024-07-15 08:03:39.455603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.455646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.468210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:36:48.396 [2024-07-15 08:03:39.470715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.470757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.482854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:36:48.396 [2024-07-15 08:03:39.483891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.483934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.499087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:36:48.396 [2024-07-15 08:03:39.500344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.500387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.513935] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:36:48.396 [2024-07-15 08:03:39.515250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.515293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.531751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:36:48.396 [2024-07-15 08:03:39.533235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.548101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:36:48.396 [2024-07-15 08:03:39.549754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.549797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.396 [2024-07-15 08:03:39.562954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:36:48.396 [2024-07-15 08:03:39.564572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.396 [2024-07-15 08:03:39.564614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:48.397 [2024-07-15 08:03:39.577613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:36:48.397 [2024-07-15 08:03:39.578641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.397 [2024-07-15 08:03:39.578683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:48.397 [2024-07-15 08:03:39.593448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:48.397 [2024-07-15 08:03:39.594461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.397 [2024-07-15 08:03:39.594504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.397 [2024-07-15 08:03:39.611633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:36:48.397 [2024-07-15 08:03:39.613661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.397 [2024-07-15 08:03:39.613704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.656 
[2024-07-15 08:03:39.626446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:48.656 [2024-07-15 08:03:39.627898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.656 [2024-07-15 08:03:39.627955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:48.656 [2024-07-15 08:03:39.642377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:36:48.656 [2024-07-15 08:03:39.643694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.656 [2024-07-15 08:03:39.643737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:48.656 [2024-07-15 08:03:39.658476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:36:48.656 [2024-07-15 08:03:39.660125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.656 [2024-07-15 08:03:39.660164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:48.656 [2024-07-15 08:03:39.673118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:36:48.656 [2024-07-15 08:03:39.674771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.656 [2024-07-15 08:03:39.674813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:48.656 [2024-07-15 08:03:39.690609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:36:48.656 [2024-07-15 08:03:39.692474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.656 [2024-07-15 08:03:39.692516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.656 [2024-07-15 08:03:39.706790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:36:48.656 [2024-07-15 08:03:39.708816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.708859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.719633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:36:48.657 [2024-07-15 08:03:39.720856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.720908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.735419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:36:48.657 [2024-07-15 08:03:39.736629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.736672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.749966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:36:48.657 [2024-07-15 08:03:39.751169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.751212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.767501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:36:48.657 [2024-07-15 08:03:39.768949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.768989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.783909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:36:48.657 [2024-07-15 08:03:39.785540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.785583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.799899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:48.657 [2024-07-15 08:03:39.801513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.801556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.815823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:36:48.657 [2024-07-15 08:03:39.817378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.817421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.833866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60 00:36:48.657 [2024-07-15 08:03:39.836513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.836556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.844935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:36:48.657 [2024-07-15 08:03:39.846137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.846191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.859891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:36:48.657 [2024-07-15 08:03:39.861091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.861130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:48.657 [2024-07-15 08:03:39.877680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8a50 00:36:48.657 [2024-07-15 08:03:39.879243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.657 [2024-07-15 08:03:39.879301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.894625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:36:48.916 [2024-07-15 08:03:39.896321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:39.896365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.909725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:36:48.916 [2024-07-15 08:03:39.911340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:39.911383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.927176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:36:48.916 [2024-07-15 08:03:39.929049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:39.929087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.943510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:36:48.916 [2024-07-15 08:03:39.945544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:48.916 [2024-07-15 08:03:39.945594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.958377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:36:48.916 [2024-07-15 08:03:39.960379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:39.960422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.973000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:36:48.916 [2024-07-15 08:03:39.974384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:39.974427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:39.988859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:36:48.916 [2024-07-15 08:03:39.990284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:39.990328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.005746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f92c0 00:36:48.916 [2024-07-15 08:03:40.007544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.007592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.026749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:48.916 [2024-07-15 08:03:40.029511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.029568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.042475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:36:48.916 [2024-07-15 08:03:40.044355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.044399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.057246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:36:48.916 [2024-07-15 08:03:40.059190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:105 nsid:1 lba:16874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.059245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.072403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:36:48.916 [2024-07-15 08:03:40.073646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.073690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.087620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:36:48.916 [2024-07-15 08:03:40.088905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.088988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.106054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:36:48.916 [2024-07-15 08:03:40.107495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.107538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.122598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:48.916 [2024-07-15 08:03:40.124317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.124360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:48.916 [2024-07-15 08:03:40.138224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:36:48.916 [2024-07-15 08:03:40.139862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.916 [2024-07-15 08:03:40.139924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:49.176 [2024-07-15 08:03:40.156209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:36:49.176 [2024-07-15 08:03:40.158241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.176 [2024-07-15 08:03:40.158285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.176 [2024-07-15 08:03:40.172809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:36:49.176 [2024-07-15 08:03:40.174823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.174867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.187978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:36:49.177 [2024-07-15 08:03:40.190032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.190072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.202889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:36:49.177 [2024-07-15 08:03:40.204305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.204362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.219361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:36:49.177 [2024-07-15 08:03:40.220636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.220688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.235554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:36:49.177 [2024-07-15 08:03:40.237240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.237283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.251704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:36:49.177 [2024-07-15 08:03:40.253252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.253295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.270091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:36:49.177 [2024-07-15 08:03:40.272726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.272769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.281300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195fbcf0 00:36:49.177 [2024-07-15 08:03:40.282542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.282597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.296377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:36:49.177 [2024-07-15 08:03:40.297590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.297634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.313921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:36:49.177 [2024-07-15 08:03:40.315324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.315366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.330092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:36:49.177 [2024-07-15 08:03:40.331707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.331749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.344981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:36:49.177 [2024-07-15 08:03:40.346569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.346612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.362520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:36:49.177 [2024-07-15 08:03:40.364365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.364421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.378739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:36:49.177 [2024-07-15 08:03:40.380777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.380821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.177 [2024-07-15 08:03:40.393708] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:36:49.177 [2024-07-15 08:03:40.395738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.177 [2024-07-15 08:03:40.395793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:49.437 [2024-07-15 08:03:40.408437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:36:49.437 [2024-07-15 08:03:40.409789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.409832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:49.437 [2024-07-15 08:03:40.422786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:36:49.437 [2024-07-15 08:03:40.424257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.424299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:49.437 [2024-07-15 08:03:40.440334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:36:49.437 [2024-07-15 08:03:40.441992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.442031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:49.437 [2024-07-15 08:03:40.456581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:36:49.437 [2024-07-15 08:03:40.458381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.458425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:49.437 [2024-07-15 08:03:40.471339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:49.437 [2024-07-15 08:03:40.473217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.473260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:49.437 [2024-07-15 08:03:40.486014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:36:49.437 [2024-07-15 08:03:40.487252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.487295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:49.437 
[2024-07-15 08:03:40.503656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:36:49.437 [2024-07-15 08:03:40.505654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.437 [2024-07-15 08:03:40.505697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.518330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:49.438 [2024-07-15 08:03:40.519726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.519769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.533941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:36:49.438 [2024-07-15 08:03:40.535325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.535371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.552848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:49.438 [2024-07-15 08:03:40.555284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.555328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.568071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:36:49.438 [2024-07-15 08:03:40.569960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.570000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.582823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:36:49.438 [2024-07-15 08:03:40.585676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.585719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.597839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:36:49.438 [2024-07-15 08:03:40.599108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.599146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.614496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:36:49.438 [2024-07-15 08:03:40.615942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.615982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.629809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:36:49.438 [2024-07-15 08:03:40.631233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.631287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.648054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:36:49.438 [2024-07-15 08:03:40.649674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.438 [2024-07-15 08:03:40.649717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:49.438 [2024-07-15 08:03:40.664414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:36:49.696 [2024-07-15 08:03:40.666102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.696 [2024-07-15 08:03:40.666141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:49.696 [2024-07-15 08:03:40.681089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:36:49.696 [2024-07-15 08:03:40.682914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.696 [2024-07-15 08:03:40.682970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:49.696 [2024-07-15 08:03:40.697347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:36:49.696 [2024-07-15 08:03:40.699214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.696 [2024-07-15 08:03:40.699257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:49.696 [2024-07-15 08:03:40.713443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:36:49.696 [2024-07-15 08:03:40.715295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.696 [2024-07-15 08:03:40.715339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:49.696 [2024-07-15 08:03:40.729523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550
[2024-07-15 08:03:40.731419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-15 08:03:40.731462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:49.696
00:36:49.696 Latency(us)
00:36:49.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.696 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:49.696 nvme0n1 : 2.01 15970.82 62.39 0.00 0.00 7998.69 3519.53 20486.07
00:36:49.696 ===================================================================================================================
00:36:49.696 Total : 15970.82 62.39 0.00 0.00 7998.69 3519.53 20486.07
00:36:49.696 0
00:36:49.696 08:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:49.696 08:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:49.696 08:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:49.696 | .driver_specific
00:36:49.696 | .nvme_error
00:36:49.696 | .status_code
00:36:49.696 | .command_transient_transport_error'
00:36:49.696 08:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 ))
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1244806
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1244806 ']'
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1244806
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1244806
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1244806'
00:36:49.955 killing process with pid 1244806
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1244806
00:36:49.955 Received shutdown signal, test time was about 2.000000 seconds
00:36:49.955
00:36:49.955 Latency(us)
00:36:49.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.955 ===================================================================================================================
00:36:49.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:49.955 08:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1244806
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1245399
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1245399 /var/tmp/bperf.sock
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1245399 ']'
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:51.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:51.335 08:03:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:51.335 [2024-07-15 08:03:42.204334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:51.335 [2024-07-15 08:03:42.204473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245399 ]
00:36:51.335 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:51.335 Zero copy mechanism will not be used.
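Note on the pass/fail check above: the "(( 125 > 0 ))" line is the digest-error assertion for the 4096-byte, queue-depth-128 pass. With data digest enabled and crc32c corruption injected, each digest error is expected to complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and get_transient_errcount reads the per-bdev tally of such completions over the bdevperf RPC socket. A minimal standalone sketch of the same query (rpc.py path, socket, and bdev name copied from this run; the single-line jq path is equivalent to the wrapped multi-line filter in the trace):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Here it printed 125, matching the digest errors logged above, so the pass succeeded. The run that follows repeats the same flow with 131072-byte writes at queue depth 16.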
00:36:51.335 EAL: No free 2048 kB hugepages reported on node 1
00:36:51.335 [2024-07-15 08:03:42.334237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:51.594 [2024-07-15 08:03:42.587674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:52.173 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:52.173 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:52.173 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:52.173 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:52.434 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:52.434 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:52.434 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:52.434 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:52.434 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.434 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.694 nvme0n1
00:36:52.694 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:52.694 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:52.694 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:52.694 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:52.694 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:52.694 08:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:52.694 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:52.694 Zero copy mechanism will not be used.
00:36:52.694 Running I/O for 2 seconds...
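Note on the setup above: the RPC sequence is the whole wiring for a digest-error pass, condensed below as a sketch. Paths are shortened to the spdk checkout; bperf_rpc and rpc_cmd are harness wrappers, and while the trace shows bperf_rpc pointing rpc.py at /var/tmp/bperf.sock, the assumption here is that rpc_cmd targets the nvmf target application's default RPC socket.

  # bdevperf side: keep per-status-code NVMe error counters and retry failed I/O indefinitely
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: keep crc32c corruption disabled while the controller attaches
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach over TCP with data digest (--ddgst) enabled, so data PDUs carry a crc32c digest
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: start corrupting crc32c results (the -i 32 argument is copied verbatim from the trace)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive I/O against the attached bdev for the configured 2 seconds
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a data_crc32_calc_done data digest error in tcp.c paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the affected WRITE, which is exactly what the counter checked after each run tallies.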
00:36:52.694 [2024-07-15 08:03:43.876245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.694 [2024-07-15 08:03:43.876769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.694 [2024-07-15 08:03:43.876821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.694 [2024-07-15 08:03:43.888670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.694 [2024-07-15 08:03:43.889114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.694 [2024-07-15 08:03:43.889173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.694 [2024-07-15 08:03:43.901481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.694 [2024-07-15 08:03:43.902023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.694 [2024-07-15 08:03:43.902062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.694 [2024-07-15 08:03:43.914461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.694 [2024-07-15 08:03:43.914964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.694 [2024-07-15 08:03:43.915004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.927613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.928062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.928103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.938288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.938689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.938747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.948703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.949118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.949178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.960035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.960540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.960584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.970857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.971329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.971387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.980939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.981078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.981117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:43.991506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:43.991944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:43.991983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.001807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.002330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.002383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.012311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.012706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.012759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.022229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.022660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 
08:03:44.022713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.032636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.033071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.033111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.042959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.043459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.043516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.053274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.053725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.053762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.063376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.063848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.063916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.073168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.073334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.952 [2024-07-15 08:03:44.073372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.952 [2024-07-15 08:03:44.084220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.952 [2024-07-15 08:03:44.084698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.084737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.094156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.094603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.094660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.103896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.104394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.104446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.114080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.114503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.114556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.124648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.125100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.125140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.135160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.135636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.135674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.145420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.145832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.145872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.154821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.155321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.155359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.163982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.164434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.164470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.953 [2024-07-15 08:03:44.172907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.953 [2024-07-15 08:03:44.173277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.953 [2024-07-15 08:03:44.173339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.212 [2024-07-15 08:03:44.182060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.212 [2024-07-15 08:03:44.182587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.212 [2024-07-15 08:03:44.182641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.212 [2024-07-15 08:03:44.193144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.212 [2024-07-15 08:03:44.193746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.212 [2024-07-15 08:03:44.193800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.212 [2024-07-15 08:03:44.204629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.212 [2024-07-15 08:03:44.205030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.212 [2024-07-15 08:03:44.205070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.212 [2024-07-15 08:03:44.214150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.212 [2024-07-15 08:03:44.214651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.212 [2024-07-15 08:03:44.214704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.212 [2024-07-15 08:03:44.224211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.212 [2024-07-15 08:03:44.224731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.212 [2024-07-15 08:03:44.224783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.212 [2024-07-15 08:03:44.233788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:53.212 [2024-07-15 08:03:44.234186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.212 [2024-07-15 08:03:44.234239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:53.212 [2024-07-15 08:03:44.243698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:53.212 [2024-07-15 08:03:44.244228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.212 [2024-07-15 08:03:44.244280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence — a data_crc32_calc_done *ERROR* on tqpair (0x618000006080) with pdu=0x2000195fef90, the affected WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for each subsequent WRITE, with only lba, sqhd, and the timestamps varying, through the WRITE at lba:24992 (2024-07-15 08:03:44.753815) ...]
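The repeated data_crc32_calc_done failures above are the NVMe/TCP data digest check firing: with DDGST enabled, the receiver recomputes a standard CRC32C over the DATA field of each PDU and compares it against the 32-bit digest carried on the wire, and here the comparison fails on every PDU (evidently because this test pass deliberately corrupts the payload). A minimal, self-contained sketch of that check, using a bitwise CRC32C for clarity — ddgst_ok is an illustrative helper, not SPDK's actual API:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* CRC32C (Castagnoli), reflected polynomial 0x82F63B78 -- the digest
     * algorithm NVMe/TCP uses for HDGST/DDGST. Bitwise for clarity; real
     * implementations use lookup tables or the SSE4.2 crc32 instruction. */
    static uint32_t crc32c(const void *buf, size_t len, uint32_t crc)
    {
        const uint8_t *p = buf;

        crc = ~crc;                       /* standard init 0xFFFFFFFF */
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
        }
        return ~crc;                      /* standard final XOR */
    }

    /* Illustrative receive-side check: recompute the digest over the PDU
     * DATA field and compare it with the DDGST value from the wire. A
     * mismatch is what tcp.c reports above as "Data digest error". */
    static int ddgst_ok(const void *data, size_t len, uint32_t ddgst_from_wire)
    {
        return crc32c(data, len, 0) == ddgst_from_wire;
    }

    int main(void)
    {
        uint8_t payload[512];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c(payload, sizeof(payload), 0);
        payload[100] ^= 0x01;             /* flip one bit, as the test does */

        printf("digest check after corruption: %s\n",
               ddgst_ok(payload, sizeof(payload), good) ? "ok" : "DATA DIGEST ERROR");
        return 0;
    }

Flipping even a single payload bit makes the recomputed CRC32C diverge from the wire digest, which is exactly the condition logged for every PDU in this run.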
00:36:53.734 [2024-07-15 08:03:44.753854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:53.734 [2024-07-15 08:03:44.761897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:53.734 [2024-07-15 08:03:44.762239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:53.734 [2024-07-15 08:03:44.762277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the digest-error / failed-WRITE pattern continues uninterrupted (lba, sqhd, and timestamps vary) through the data digest error at (2024-07-15 08:03:45.196945) ...]
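Each spdk_nvme_print_completion line encodes the completion status as (SCT/SC) in hex, followed by the phase, more, and do-not-retry bits: (00/22) is status code type 0x0 (generic command status) with status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 leaves the failed command eligible for retry. A sketch of unpacking the 16-bit status word from completion queue entry dword 3, with the field offsets as given in the NVMe base specification — the struct and field names here are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* The NVMe CQE status field (CQE DW3 bits 31:16, per the NVMe base
     * spec), viewed as a 16-bit halfword:
     *   bit 0      P    phase tag
     *   bits 8:1   SC   status code
     *   bits 11:9  SCT  status code type
     *   bit 14     M    more
     *   bit 15     DNR  do not retry */
    struct nvme_status {
        unsigned p, sc, sct, m, dnr;
    };

    static struct nvme_status decode_status(uint16_t raw)
    {
        struct nvme_status s = {
            .p   =  raw        & 0x1,
            .sc  = (raw >> 1)  & 0xff,
            .sct = (raw >> 9)  & 0x7,
            .m   = (raw >> 14) & 0x1,
            .dnr = (raw >> 15) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22 with P=0, M=0, DNR=0 -- the "(00/22)" printed
         * for each failed WRITE (Command Transient Transport Error). */
        uint16_t raw = (0x0 << 9) | (0x22 << 1);
        struct nvme_status s = decode_status(raw);

        printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
               s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Because DNR is clear on every completion here, the initiator is free to retry each WRITE, which is why the run keeps issuing further commands after every digest failure.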
00:36:54.011 [2024-07-15 08:03:45.197276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.011 [2024-07-15 08:03:45.197314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... further identical digest-error / failed-WRITE triples (lba, sqhd, and timestamps vary) through the completion at (2024-07-15 08:03:45.436962) ...]
00:36:54.304 [2024-07-15 08:03:45.445685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:54.304 [2024-07-15 08:03:45.446134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.304 [2024-07-15 08:03:45.446175] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.454744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.455184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.455224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.464001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.464426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.464481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.473329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.473707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.473747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.482535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.482938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.482978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.491619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.492078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.492118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.500737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.501160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.501201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.509996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.510414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.518975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.519403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.519443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.304 [2024-07-15 08:03:45.528427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.304 [2024-07-15 08:03:45.528847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.304 [2024-07-15 08:03:45.528895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.564 [2024-07-15 08:03:45.537505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.564 [2024-07-15 08:03:45.537917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.564 [2024-07-15 08:03:45.537958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.564 [2024-07-15 08:03:45.546309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.564 [2024-07-15 08:03:45.546694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.564 [2024-07-15 08:03:45.546734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.564 [2024-07-15 08:03:45.555659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.564 [2024-07-15 08:03:45.556027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.564 [2024-07-15 08:03:45.556068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.564 [2024-07-15 08:03:45.564589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.564 [2024-07-15 08:03:45.565009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.565050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.573594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.574008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.574047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.582718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.583162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.583203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.591871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.592260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.592299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.601109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.601521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.601560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.610255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.610718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.610758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.619624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.620029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.628984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.629360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.629401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.638671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.639046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.639085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.647672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.648056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.648096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.655839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.656289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.656343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.664239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.664591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.664631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.672225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.672567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.672607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.680254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.680633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.680673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.689544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.690025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.690065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.698889] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.699269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.699309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.707928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.708267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.708306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.715735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.716081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.723782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.724194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.724235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.731734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.732103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.732153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.739831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.740273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.740313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.747630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.748014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.748055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.755053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.755388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.755428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.763342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.763678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.763718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.770722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.771074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.771113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.778220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.778579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.778619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.565 [2024-07-15 08:03:45.785961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.565 [2024-07-15 08:03:45.786295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.565 [2024-07-15 08:03:45.786344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.793526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.793861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.793909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.800869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.801251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.801290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.808650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.809042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.809082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.816498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.816867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.816925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.824242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.824574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.824613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.831867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.832227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.832266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.839615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.839957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.839997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.847548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.847900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.824 [2024-07-15 08:03:45.847951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.824 [2024-07-15 08:03:45.855458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.824 [2024-07-15 08:03:45.855829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:36:54.824 [2024-07-15 08:03:45.855869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:54.824 [2024-07-15 08:03:45.863642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:54.824 [2024-07-15 08:03:45.863848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.824 [2024-07-15 08:03:45.863895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:54.824
00:36:54.824 Latency(us)
00:36:54.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:54.824 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:54.824 nvme0n1 : 2.00 3511.03 438.88 0.00 0.00 4543.72 3470.98 16117.00
00:36:54.824 ===================================================================================================================
00:36:54.824 Total : 3511.03 438.88 0.00 0.00 4543.72 3470.98 16117.00
00:36:54.824 0
00:36:54.824 08:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:54.824 08:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:54.824 08:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:54.824 08:03:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:54.824 | .driver_specific
00:36:54.824 | .nvme_error
00:36:54.824 | .status_code
00:36:54.824 | .command_transient_transport_error'
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 227 > 0 ))
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1245399
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1245399 ']'
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1245399
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1245399
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1245399'
00:36:55.084 killing process with pid 1245399
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1245399
00:36:55.084 Received shutdown signal, test time was about 2.000000 seconds
00:36:55.084
00:36:55.084 Latency(us)
00:36:55.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:55.084 ===================================================================================================================
00:36:55.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:55.084 08:03:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1245399
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1243377
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1243377 ']'
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1243377
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243377
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243377'
00:36:56.466 killing process with pid 1243377
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1243377
00:36:56.466 08:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1243377
00:36:57.845
00:36:57.845 real 0m23.624s
00:36:57.845 user 0m45.636s
00:36:57.845 sys 0m4.628s
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:57.845 ************************************
00:36:57.845 END TEST nvmf_digest_error
00:36:57.845 ************************************
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:57.845 rmmod nvme_tcp
00:36:57.845 rmmod nvme_fabrics
00:36:57.845 rmmod nvme_keyring
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1243377 ']'
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1243377
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1243377 ']'
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1243377
00:36:57.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1243377) - No such process
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1243377 is not found'
00:36:57.845 Process with pid 1243377 is not found
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:57.845 08:03:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:59.748 08:03:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:59.748
00:36:59.748 real 0m52.494s
00:36:59.748 user 1m34.209s
00:36:59.748 sys 0m10.828s
00:36:59.748 08:03:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:59.748 08:03:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:59.748 ************************************
00:36:59.748 END TEST nvmf_digest
00:36:59.748 ************************************
00:36:59.748 08:03:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:36:59.748 08:03:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:36:59.748 08:03:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:36:59.748 08:03:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:36:59.748 08:03:50 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:59.748 08:03:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:36:59.748 08:03:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:59.748 08:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:59.748 ************************************
00:36:59.748 START TEST nvmf_bdevperf
00:36:59.748 ************************************
00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:59.748 * Looking for test storage...
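The digest_error verdict above comes down to a single RPC round trip: host/digest.sh queries bdev_get_iostat over the bperf socket and jq pulls the transient-transport-error counter out of the bdev's NVMe error statistics (227 such completions in this run, where any value above zero passes). A minimal standalone sketch of that check, assuming the socket path and bdev name from the trace above are still valid:

  #!/usr/bin/env bash
  # Sketch of the get_transient_errcount step traced above, not the test itself.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdev_get_iostat returns JSON; the nvme_error block counts completions by status code.
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only requires the counter to be non-zero:
  (( errs > 0 )) && echo "transient transport errors observed: $errs"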
00:36:59.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:59.748 08:03:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:01.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:01.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:01.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.646 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:01.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.647 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:01.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:37:01.904 00:37:01.904 --- 10.0.0.2 ping statistics --- 00:37:01.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.904 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:37:01.904 00:37:01.904 --- 10.0.0.1 ping statistics --- 00:37:01.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.904 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1248130 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1248130 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1248130 ']' 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:01.904 08:03:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.904 [2024-07-15 08:03:53.079017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:01.904 [2024-07-15 08:03:53.079167] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.160 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.160 [2024-07-15 08:03:53.208763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:02.417 [2024-07-15 08:03:53.460873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
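
For reference, the namespace plumbing that nvmf_tcp_init traced out above boils down to the following (a condensed replay of the xtraced commands; cvl_0_0 and cvl_0_1 are the two ports of the E810 NIC, which this phy rig presumably has cabled back-to-back so the pings cross a real link rather than loopback):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP replies back in
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
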
00:37:02.417 [2024-07-15 08:03:53.460956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.417 [2024-07-15 08:03:53.460991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.417 [2024-07-15 08:03:53.461013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.417 [2024-07-15 08:03:53.461035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.417 [2024-07-15 08:03:53.461382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.417 [2024-07-15 08:03:53.461471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.417 [2024-07-15 08:03:53.461480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.982 [2024-07-15 08:03:54.036402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.982 Malloc0 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.982 [2024-07-15 08:03:54.155813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:02.982 { 00:37:02.982 "params": { 00:37:02.982 "name": "Nvme$subsystem", 00:37:02.982 "trtype": "$TEST_TRANSPORT", 00:37:02.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.982 "adrfam": "ipv4", 00:37:02.982 "trsvcid": "$NVMF_PORT", 00:37:02.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.982 "hdgst": ${hdgst:-false}, 00:37:02.982 "ddgst": ${ddgst:-false} 00:37:02.982 }, 00:37:02.982 "method": "bdev_nvme_attach_controller" 00:37:02.982 } 00:37:02.982 EOF 00:37:02.982 )") 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:02.982 08:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:02.982 "params": { 00:37:02.982 "name": "Nvme1", 00:37:02.982 "trtype": "tcp", 00:37:02.982 "traddr": "10.0.0.2", 00:37:02.982 "adrfam": "ipv4", 00:37:02.982 "trsvcid": "4420", 00:37:02.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:02.982 "hdgst": false, 00:37:02.982 "ddgst": false 00:37:02.982 }, 00:37:02.982 "method": "bdev_nvme_attach_controller" 00:37:02.982 }' 00:37:03.242 [2024-07-15 08:03:54.240189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:03.242 [2024-07-15 08:03:54.240325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248288 ] 00:37:03.242 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.242 [2024-07-15 08:03:54.363203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.502 [2024-07-15 08:03:54.599327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.067 Running I/O for 1 seconds... 
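
The target provisioning that tgt_init just performed through rpc_cmd is equivalent to the following standalone rpc.py calls (a sketch; assumes it is run from the SPDK repo root against the target's default /var/tmp/spdk.sock, which stays reachable from the root namespace because UNIX sockets are not network-namespaced):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 = 8 KiB in-capsule data, -o as carried in the harness's NVMF_TRANSPORT_OPTS
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a = allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then connects to that listener as the initiator, using the bdev_nvme_attach_controller JSON printed above as its entire bdev configuration.
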
00:37:04.999
00:37:04.999                                                                                                 Latency(us)
00:37:04.999  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:04.999  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:04.999  Verification LBA range: start 0x0 length 0x4000
00:37:04.999  Nvme1n1                     :       1.01    6213.92      24.27       0.00     0.00   20504.05    1820.44   16699.54
00:37:04.999 ===================================================================================================================
00:37:04.999  Total                       :               6213.92      24.27       0.00     0.00   20504.05    1820.44   16699.54
00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1248559 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:05.937 { 00:37:05.937 "params": { 00:37:05.937 "name": "Nvme$subsystem", 00:37:05.937 "trtype": "$TEST_TRANSPORT", 00:37:05.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.937 "adrfam": "ipv4", 00:37:05.937 "trsvcid": "$NVMF_PORT", 00:37:05.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.937 "hdgst": ${hdgst:-false}, 00:37:05.937 "ddgst": ${ddgst:-false} 00:37:05.937 }, 00:37:05.937 "method": "bdev_nvme_attach_controller" 00:37:05.937 } 00:37:05.937 EOF 00:37:05.937 )") 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:05.937 08:03:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:05.937 "params": { 00:37:05.937 "name": "Nvme1", 00:37:05.937 "trtype": "tcp", 00:37:05.937 "traddr": "10.0.0.2", 00:37:05.937 "adrfam": "ipv4", 00:37:05.937 "trsvcid": "4420", 00:37:05.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.937 "hdgst": false, 00:37:05.937 "ddgst": false 00:37:05.937 }, 00:37:05.937 "method": "bdev_nvme_attach_controller" 00:37:05.937 }' 00:37:05.937 [2024-07-15 08:03:57.151007] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:05.937 [2024-07-15 08:03:57.151151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248559 ] 00:37:06.195 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.195 [2024-07-15 08:03:57.280516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.452 [2024-07-15 08:03:57.512143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.018 Running I/O for 15 seconds...
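
The one-second pass above establishes the baseline (6213.92 IOPS / 24.27 MiB/s on Nvme1n1); the second invocation is the same workload stretched to a 15-second window so the harness can inject a fault mid-run. Annotated for reference (the flag glosses are standard bdevperf options; -f is left exactly as host/bdevperf.sh passes it):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /dev/fd/63 \   # bdev config piped in from gen_nvmf_target_json
      -q 128 \              # 128 outstanding requests
      -o 4096 \             # 4 KiB I/O size
      -w verify \           # write, read back, compare
      -t 15 \               # 15 s run: long enough to kill the target mid-flight
      -f                    # harness-chosen flag, as in host/bdevperf.sh@29

As the next lines show, the script then hard-kills the target underneath this live I/O.
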
00:37:08.919 08:04:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1248130
00:37:08.919 08:04:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:37:08.919 [2024-07-15 08:04:00.095748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:08.919 [2024-07-15 08:04:00.095824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:08.919 [2024-07-15 08:04:00.095907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:08.919 [2024-07-15 08:04:00.095958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[~120 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every remaining in-flight WRITE (lba 101656-101704) and READ (lba 100688-101624) on qid:1 drains with the same ABORTED - SQ DELETION (00/08) status]
00:37:08.922 [2024-07-15 08:04:00.102668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:37:08.922 [2024-07-15 08:04:00.102701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:37:08.922 [2024-07-15 08:04:00.102723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:37:08.922 [2024-07-15 08:04:00.102745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101632 len:8 PRP1 0x0 PRP2 0x0
00:37:08.922 [2024-07-15 08:04:00.102768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:08.922 [2024-07-15 08:04:00.103091] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
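
Each of those completions carries the status pair SPDK prints in parentheses: "(00/08)" is status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe base spec defines as Command Aborted due to SQ Deletion. The host driver tears down its submission queue when the TCP connection dies, so every queued request drains with this status instead of hanging. A toy decoder for the generic codes seen in logs like this (the 0/0, 0/4 and 0/8 mappings follow the spec; anything else is deferred to the spec tables):

  decode_nvme_status() {                      # usage: decode_nvme_status 00 08
      local sct=$((16#$1)) sc=$((16#$2))      # both fields are printed in hex
      case "$sct/$sc" in
          0/0) echo "SUCCESS" ;;
          0/4) echo "DATA TRANSFER ERROR" ;;
          0/8) echo "ABORTED - SQ DELETION" ;;   # the status flooding this log
          *)   echo "sct=$sct sc=$sc (see NVMe base spec status code tables)" ;;
      esac
  }
  decode_nvme_status 00 08                    # -> ABORTED - SQ DELETION
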
00:37:08.922 [2024-07-15 08:04:00.103208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:37:08.922 [2024-07-15 08:04:00.103257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:08.922 [2024-07-15 08:04:00.103281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:37:08.922 [2024-07-15 08:04:00.103319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:08.922 [2024-07-15 08:04:00.103343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:37:08.922 [2024-07-15 08:04:00.103365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:08.923 [2024-07-15 08:04:00.103388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:37:08.923 [2024-07-15 08:04:00.103409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:08.923 [2024-07-15 08:04:00.103430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:08.923 [2024-07-15 08:04:00.107945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:08.923 [2024-07-15 08:04:00.108010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:08.923 [2024-07-15 08:04:00.108821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:08.923 [2024-07-15 08:04:00.108872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:08.923 [2024-07-15 08:04:00.108927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:08.923 [2024-07-15 08:04:00.109234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:08.923 [2024-07-15 08:04:00.109524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:08.923 [2024-07-15 08:04:00.109557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:08.923 [2024-07-15 08:04:00.109583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:08.923 [2024-07-15 08:04:00.113765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:08.923 [2024-07-15 08:04:00.122849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:08.923 [2024-07-15 08:04:00.123375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:08.923 [2024-07-15 08:04:00.123417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:08.923 [2024-07-15 08:04:00.123443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:08.923 [2024-07-15 08:04:00.123735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:08.923 [2024-07-15 08:04:00.124038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:08.923 [2024-07-15 08:04:00.124071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:08.923 [2024-07-15 08:04:00.124093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:08.923 [2024-07-15 08:04:00.128245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:08.923 [2024-07-15 08:04:00.137355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:08.923 [2024-07-15 08:04:00.137856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:08.923 [2024-07-15 08:04:00.137909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:08.923 [2024-07-15 08:04:00.137935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:08.923 [2024-07-15 08:04:00.138223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:08.923 [2024-07-15 08:04:00.138511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:08.923 [2024-07-15 08:04:00.138543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:08.923 [2024-07-15 08:04:00.138565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:08.923 [2024-07-15 08:04:00.142763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.181 [2024-07-15 08:04:00.151871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.181 [2024-07-15 08:04:00.152338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.181 [2024-07-15 08:04:00.152379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.181 [2024-07-15 08:04:00.152405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.181 [2024-07-15 08:04:00.152690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.181 [2024-07-15 08:04:00.152991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.181 [2024-07-15 08:04:00.153023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.181 [2024-07-15 08:04:00.153045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.181 [2024-07-15 08:04:00.157212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.181 [2024-07-15 08:04:00.166412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.181 [2024-07-15 08:04:00.166906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.181 [2024-07-15 08:04:00.166948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.166974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.167256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.167540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.167577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.167600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.171681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.180820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.181344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.181385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.181411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.181693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.181999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.182032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.182054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.186135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.195297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.195781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.195822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.195847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.196141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.196428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.196460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.196481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.200575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.209765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.210257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.210298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.210324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.210607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.210905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.210937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.210959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.215053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.224208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.224692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.224734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.224760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.225057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.225344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.225375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.225396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.229491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.238663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.239141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.239182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.239208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.239490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.239775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.239806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.239828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.243920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.253106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.253697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.253756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.253782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.254074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.254359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.254390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.254412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.258497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.267672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.268178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.268219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.268251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.268536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.268822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.268854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.268886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.273002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.282152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.282650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.282690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.282716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.283010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.283294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.283325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.283347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.287429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.296549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.297050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.297091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.297117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.297398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.297681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.297712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.297734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.301806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.182 [2024-07-15 08:04:00.310953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.182 [2024-07-15 08:04:00.311436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.182 [2024-07-15 08:04:00.311477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.182 [2024-07-15 08:04:00.311502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.182 [2024-07-15 08:04:00.311782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.182 [2024-07-15 08:04:00.312079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.182 [2024-07-15 08:04:00.312116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.182 [2024-07-15 08:04:00.312139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.182 [2024-07-15 08:04:00.316213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.183 [2024-07-15 08:04:00.325334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.183 [2024-07-15 08:04:00.325835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.183 [2024-07-15 08:04:00.325884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.183 [2024-07-15 08:04:00.325926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.183 [2024-07-15 08:04:00.326210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.183 [2024-07-15 08:04:00.326495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.183 [2024-07-15 08:04:00.326526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.183 [2024-07-15 08:04:00.326548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.183 [2024-07-15 08:04:00.330619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.183 [2024-07-15 08:04:00.339742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.183 [2024-07-15 08:04:00.340243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.183 [2024-07-15 08:04:00.340283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.183 [2024-07-15 08:04:00.340309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.183 [2024-07-15 08:04:00.340590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.183 [2024-07-15 08:04:00.340886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.183 [2024-07-15 08:04:00.340917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.183 [2024-07-15 08:04:00.340939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.183 [2024-07-15 08:04:00.345000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.183 [2024-07-15 08:04:00.354355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.183 [2024-07-15 08:04:00.354821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.183 [2024-07-15 08:04:00.354862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.183 [2024-07-15 08:04:00.354898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.183 [2024-07-15 08:04:00.355182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.183 [2024-07-15 08:04:00.355466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.183 [2024-07-15 08:04:00.355497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.183 [2024-07-15 08:04:00.355519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.183 [2024-07-15 08:04:00.359577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.183 [2024-07-15 08:04:00.368923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.183 [2024-07-15 08:04:00.369409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.183 [2024-07-15 08:04:00.369451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.183 [2024-07-15 08:04:00.369477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.183 [2024-07-15 08:04:00.369759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.183 [2024-07-15 08:04:00.370056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.183 [2024-07-15 08:04:00.370088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.183 [2024-07-15 08:04:00.370110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.183 [2024-07-15 08:04:00.374189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.183 [2024-07-15 08:04:00.383299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.183 [2024-07-15 08:04:00.383785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.183 [2024-07-15 08:04:00.383825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.183 [2024-07-15 08:04:00.383851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.183 [2024-07-15 08:04:00.384144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.183 [2024-07-15 08:04:00.384428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.183 [2024-07-15 08:04:00.384459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.183 [2024-07-15 08:04:00.384481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.183 [2024-07-15 08:04:00.388537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.183 [2024-07-15 08:04:00.397871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.183 [2024-07-15 08:04:00.398358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.183 [2024-07-15 08:04:00.398399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.183 [2024-07-15 08:04:00.398424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.183 [2024-07-15 08:04:00.398706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.183 [2024-07-15 08:04:00.399004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.183 [2024-07-15 08:04:00.399036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.183 [2024-07-15 08:04:00.399058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.183 [2024-07-15 08:04:00.403122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.443 [2024-07-15 08:04:00.412455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.443 [2024-07-15 08:04:00.412974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.443 [2024-07-15 08:04:00.413016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.443 [2024-07-15 08:04:00.413047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.443 [2024-07-15 08:04:00.413330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.443 [2024-07-15 08:04:00.413613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.443 [2024-07-15 08:04:00.413645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.443 [2024-07-15 08:04:00.413667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.443 [2024-07-15 08:04:00.417723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.443 [2024-07-15 08:04:00.427058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.443 [2024-07-15 08:04:00.427515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.443 [2024-07-15 08:04:00.427556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.443 [2024-07-15 08:04:00.427582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.443 [2024-07-15 08:04:00.427863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.443 [2024-07-15 08:04:00.428158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.443 [2024-07-15 08:04:00.428188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.443 [2024-07-15 08:04:00.428210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.443 [2024-07-15 08:04:00.432280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.443 [2024-07-15 08:04:00.441629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.443 [2024-07-15 08:04:00.442104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.443 [2024-07-15 08:04:00.442146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.443 [2024-07-15 08:04:00.442171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.443 [2024-07-15 08:04:00.442451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.443 [2024-07-15 08:04:00.442735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.443 [2024-07-15 08:04:00.442765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.443 [2024-07-15 08:04:00.442787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.443 [2024-07-15 08:04:00.446858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.443 [2024-07-15 08:04:00.456214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.443 [2024-07-15 08:04:00.456713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.443 [2024-07-15 08:04:00.456754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.443 [2024-07-15 08:04:00.456780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.443 [2024-07-15 08:04:00.457074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.443 [2024-07-15 08:04:00.457358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.443 [2024-07-15 08:04:00.457396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.443 [2024-07-15 08:04:00.457419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.443 [2024-07-15 08:04:00.461478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.443 [2024-07-15 08:04:00.470601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.443 [2024-07-15 08:04:00.471103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.443 [2024-07-15 08:04:00.471143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.443 [2024-07-15 08:04:00.471168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.443 [2024-07-15 08:04:00.471448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.443 [2024-07-15 08:04:00.471731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.443 [2024-07-15 08:04:00.471762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.443 [2024-07-15 08:04:00.471784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.443 [2024-07-15 08:04:00.475849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.484986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.485477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.485518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.485544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.485825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.486123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.486155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.486176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.490237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.499556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.500038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.500078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.500104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.500385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.500669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.500700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.500721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.504783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.514143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.514623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.514665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.514690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.514986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.515271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.515302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.515323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.519386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.528489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.528957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.528998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.529023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.529305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.529603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.529634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.529655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.533712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.543065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.543548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.543588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.543614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.543906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.544190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.544221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.544243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.548299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.557630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.558142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.558183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.558215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.558496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.558780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.558811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.558833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.562919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.572039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.572511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.572552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.572578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.572858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.573155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.573186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.573208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.577276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.586414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.586870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.586919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.586945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.587226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.587509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.587541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.587562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.591630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.600980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.601526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.601567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.601593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.601874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.602168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.602208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.602231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.606288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.615416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.615924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.615965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.615991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.444 [2024-07-15 08:04:00.616272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.444 [2024-07-15 08:04:00.616555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.444 [2024-07-15 08:04:00.616586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.444 [2024-07-15 08:04:00.616607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.444 [2024-07-15 08:04:00.620665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.444 [2024-07-15 08:04:00.630013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.444 [2024-07-15 08:04:00.630465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.444 [2024-07-15 08:04:00.630506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.444 [2024-07-15 08:04:00.630532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.445 [2024-07-15 08:04:00.630812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.445 [2024-07-15 08:04:00.631109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.445 [2024-07-15 08:04:00.631141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.445 [2024-07-15 08:04:00.631162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.445 [2024-07-15 08:04:00.635225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.445 [2024-07-15 08:04:00.644561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.445 [2024-07-15 08:04:00.645059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.445 [2024-07-15 08:04:00.645100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.445 [2024-07-15 08:04:00.645126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.445 [2024-07-15 08:04:00.645406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.445 [2024-07-15 08:04:00.645690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.445 [2024-07-15 08:04:00.645722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.445 [2024-07-15 08:04:00.645743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.445 [2024-07-15 08:04:00.649814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.445 [2024-07-15 08:04:00.658941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.445 [2024-07-15 08:04:00.659490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.445 [2024-07-15 08:04:00.659530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.445 [2024-07-15 08:04:00.659557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.445 [2024-07-15 08:04:00.659847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.445 [2024-07-15 08:04:00.660138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.445 [2024-07-15 08:04:00.660170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.445 [2024-07-15 08:04:00.660192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.445 [2024-07-15 08:04:00.664245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.673350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.732 [2024-07-15 08:04:00.673810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.732 [2024-07-15 08:04:00.673850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.732 [2024-07-15 08:04:00.673884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.732 [2024-07-15 08:04:00.674169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.732 [2024-07-15 08:04:00.674451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.732 [2024-07-15 08:04:00.674483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.732 [2024-07-15 08:04:00.674505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.732 [2024-07-15 08:04:00.678605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.687740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.732 [2024-07-15 08:04:00.688239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.732 [2024-07-15 08:04:00.688281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.732 [2024-07-15 08:04:00.688307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.732 [2024-07-15 08:04:00.688589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.732 [2024-07-15 08:04:00.688873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.732 [2024-07-15 08:04:00.688914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.732 [2024-07-15 08:04:00.688940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.732 [2024-07-15 08:04:00.693031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.702255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.732 [2024-07-15 08:04:00.702735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.732 [2024-07-15 08:04:00.702775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.732 [2024-07-15 08:04:00.702807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.732 [2024-07-15 08:04:00.703100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.732 [2024-07-15 08:04:00.703398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.732 [2024-07-15 08:04:00.703429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.732 [2024-07-15 08:04:00.703451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.732 [2024-07-15 08:04:00.707563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.716448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.732 [2024-07-15 08:04:00.716912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.732 [2024-07-15 08:04:00.716956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.732 [2024-07-15 08:04:00.716979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.732 [2024-07-15 08:04:00.717264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.732 [2024-07-15 08:04:00.717500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.732 [2024-07-15 08:04:00.717525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.732 [2024-07-15 08:04:00.717544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.732 [2024-07-15 08:04:00.721337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.731050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.732 [2024-07-15 08:04:00.731633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.732 [2024-07-15 08:04:00.731673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.732 [2024-07-15 08:04:00.731709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.732 [2024-07-15 08:04:00.732017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.732 [2024-07-15 08:04:00.732298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.732 [2024-07-15 08:04:00.732331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.732 [2024-07-15 08:04:00.732368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.732 [2024-07-15 08:04:00.736312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.745520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:09.732 [2024-07-15 08:04:00.746001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:09.732 [2024-07-15 08:04:00.746043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:09.732 [2024-07-15 08:04:00.746069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:09.732 [2024-07-15 08:04:00.746353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:09.732 [2024-07-15 08:04:00.746659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:09.732 [2024-07-15 08:04:00.746691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:09.732 [2024-07-15 08:04:00.746713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:09.732 [2024-07-15 08:04:00.750817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:09.732 [2024-07-15 08:04:00.760001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.732 [2024-07-15 08:04:00.760479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.732 [2024-07-15 08:04:00.760519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.732 [2024-07-15 08:04:00.760545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.732 [2024-07-15 08:04:00.760828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.732 [2024-07-15 08:04:00.761125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.732 [2024-07-15 08:04:00.761157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.732 [2024-07-15 08:04:00.761179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.732 [2024-07-15 08:04:00.765296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.732 [2024-07-15 08:04:00.774523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.732 [2024-07-15 08:04:00.774997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.732 [2024-07-15 08:04:00.775040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.732 [2024-07-15 08:04:00.775066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.732 [2024-07-15 08:04:00.775349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.732 [2024-07-15 08:04:00.775638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.732 [2024-07-15 08:04:00.775669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.732 [2024-07-15 08:04:00.775691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.732 [2024-07-15 08:04:00.779787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.732 [2024-07-15 08:04:00.789025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.732 [2024-07-15 08:04:00.789501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.732 [2024-07-15 08:04:00.789542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.732 [2024-07-15 08:04:00.789568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.732 [2024-07-15 08:04:00.789851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.732 [2024-07-15 08:04:00.790146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.732 [2024-07-15 08:04:00.790178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.732 [2024-07-15 08:04:00.790200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.732 [2024-07-15 08:04:00.794301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.732 [2024-07-15 08:04:00.803498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.732 [2024-07-15 08:04:00.803993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.732 [2024-07-15 08:04:00.804034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.732 [2024-07-15 08:04:00.804061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.732 [2024-07-15 08:04:00.804347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.732 [2024-07-15 08:04:00.804634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.732 [2024-07-15 08:04:00.804665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.732 [2024-07-15 08:04:00.804687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.732 [2024-07-15 08:04:00.808805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.733 [2024-07-15 08:04:00.818081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.818578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.818618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.818645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.818949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.819240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.819272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.819294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.823428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.733 [2024-07-15 08:04:00.832683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.833200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.833241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.833267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.833550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.833838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.833869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.833904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.838033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.733 [2024-07-15 08:04:00.847220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.847707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.847747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.847779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.848073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.848360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.848392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.848414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.852550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.733 [2024-07-15 08:04:00.861737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.862249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.862290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.862316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.862600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.862896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.862928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.862950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.867045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.733 [2024-07-15 08:04:00.876240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.876768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.876826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.876851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.877143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.877430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.877461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.877483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.881593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.733 [2024-07-15 08:04:00.890783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.891293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.891334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.891360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.891643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.891948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.891980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.892003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.896101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.733 [2024-07-15 08:04:00.905280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.905760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.905800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.905826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.906120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.906406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.906438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.906461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.910559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.733 [2024-07-15 08:04:00.919734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.920339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.920380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.920406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.920688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.920985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.921017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.921040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.925131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.733 [2024-07-15 08:04:00.934313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.733 [2024-07-15 08:04:00.934792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.733 [2024-07-15 08:04:00.934833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.733 [2024-07-15 08:04:00.934859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.733 [2024-07-15 08:04:00.935153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.733 [2024-07-15 08:04:00.935439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.733 [2024-07-15 08:04:00.935470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.733 [2024-07-15 08:04:00.935492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.733 [2024-07-15 08:04:00.939618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:00.948853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:00.949369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:00.949410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:00.949436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:00.949719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:00.950021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:00.950053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:00.950075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:00.954203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:00.963443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:00.963933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:00.963974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:00.964000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:00.964283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:00.964567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:00.964599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:00.964621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:00.968745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:00.977920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:00.978403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:00.978443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:00.978469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:00.978751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:00.979050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:00.979082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:00.979104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:00.983205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:00.992346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:00.992843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:00.992898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:00.992927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:00.993222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:00.993508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:00.993540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:00.993562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:00.997656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.006834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.007299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.007341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.007367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.007650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.007949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.007981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.008003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.012089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.021208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.021684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.021725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.021751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.022046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.022332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.022364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.022385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.026460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.035600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.036135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.036176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.036202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.036483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.036772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.036805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.036827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.040931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.050055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.050527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.050568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.050594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.050886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.051171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.051203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.051226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.055288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.064632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.065145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.065186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.065212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.065493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.065776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.065808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.065830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.069905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.079031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.079519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.079559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.079585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.079866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.080163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.080195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.080217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.084296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.093420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.093893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.093935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.093961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.094243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.094527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.094559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.094582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.098649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.107855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.108445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.108506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.108533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.108814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.109108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.109141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.109163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.113480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.122368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.122916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.122968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.123003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.123286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.123570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.123602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.123623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.127691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.136829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.137390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.137454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.137481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.137764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.138058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.138090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.138112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.142193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.151315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.151804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.151888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.151917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.152207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.152491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.152523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.152545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.156609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.165711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.166210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.166251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.166277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.166558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.166842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.166873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.166921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.170990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.180092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.180587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.180627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.180653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.180947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.181237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.181269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.181291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.185355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.995 [2024-07-15 08:04:01.194473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.194969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.195010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.195036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.195318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.195602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.195633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.195655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.199724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.995 [2024-07-15 08:04:01.208866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.995 [2024-07-15 08:04:01.209353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.995 [2024-07-15 08:04:01.209395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.995 [2024-07-15 08:04:01.209421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.995 [2024-07-15 08:04:01.209701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.995 [2024-07-15 08:04:01.210008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.995 [2024-07-15 08:04:01.210041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.995 [2024-07-15 08:04:01.210063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.995 [2024-07-15 08:04:01.214133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.256 [2024-07-15 08:04:01.223262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.256 [2024-07-15 08:04:01.223760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.256 [2024-07-15 08:04:01.223801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.256 [2024-07-15 08:04:01.223826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.256 [2024-07-15 08:04:01.224121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.256 [2024-07-15 08:04:01.224406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.256 [2024-07-15 08:04:01.224438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.256 [2024-07-15 08:04:01.224466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.256 [2024-07-15 08:04:01.228536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.256 [2024-07-15 08:04:01.237646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.256 [2024-07-15 08:04:01.238153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.256 [2024-07-15 08:04:01.238193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.256 [2024-07-15 08:04:01.238219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.256 [2024-07-15 08:04:01.238501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.256 [2024-07-15 08:04:01.238785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.256 [2024-07-15 08:04:01.238816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.256 [2024-07-15 08:04:01.238838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.256 [2024-07-15 08:04:01.242910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.256 [2024-07-15 08:04:01.252238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.256 [2024-07-15 08:04:01.252712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.256 [2024-07-15 08:04:01.252754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.256 [2024-07-15 08:04:01.252780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.256 [2024-07-15 08:04:01.253073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.256 [2024-07-15 08:04:01.253358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.256 [2024-07-15 08:04:01.253390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.256 [2024-07-15 08:04:01.253411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.256 [2024-07-15 08:04:01.257484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.256 [2024-07-15 08:04:01.266821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.256 [2024-07-15 08:04:01.267294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.256 [2024-07-15 08:04:01.267334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.256 [2024-07-15 08:04:01.267360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.256 [2024-07-15 08:04:01.267641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.256 [2024-07-15 08:04:01.267938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.256 [2024-07-15 08:04:01.267971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.256 [2024-07-15 08:04:01.267992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.256 [2024-07-15 08:04:01.272058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.256 [2024-07-15 08:04:01.281381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.256 [2024-07-15 08:04:01.281855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.256 [2024-07-15 08:04:01.281909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.256 [2024-07-15 08:04:01.281937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.256 [2024-07-15 08:04:01.282218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.256 [2024-07-15 08:04:01.282503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.256 [2024-07-15 08:04:01.282535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.256 [2024-07-15 08:04:01.282556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.256 [2024-07-15 08:04:01.286628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.256 [2024-07-15 08:04:01.295968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.256 [2024-07-15 08:04:01.296468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.256 [2024-07-15 08:04:01.296509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.256 [2024-07-15 08:04:01.296534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.256 [2024-07-15 08:04:01.296815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.256 [2024-07-15 08:04:01.297140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.256 [2024-07-15 08:04:01.297172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.256 [2024-07-15 08:04:01.297194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.256 [2024-07-15 08:04:01.301249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.256 ... 00:37:11.040 [2024-07-15 08:04:01.310361 through 08:04:02.025279] (elided: the identical nine-entry reset/reconnect-fail cycle shown above repeats 50 more times at roughly 14.5 ms intervals, with only the timestamps advancing; tqpair=0x6150001f2500, addr=10.0.0.2, port=4420, and errno = 111 are constant throughout)
00:37:11.040 [2024-07-15 08:04:02.034448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.034962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.040 [2024-07-15 08:04:02.035003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.040 [2024-07-15 08:04:02.035029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.040 [2024-07-15 08:04:02.035312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.040 [2024-07-15 08:04:02.035599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.040 [2024-07-15 08:04:02.035630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.040 [2024-07-15 08:04:02.035652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.040 [2024-07-15 08:04:02.039750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.040 [2024-07-15 08:04:02.048918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.049377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.040 [2024-07-15 08:04:02.049418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.040 [2024-07-15 08:04:02.049444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.040 [2024-07-15 08:04:02.049735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.040 [2024-07-15 08:04:02.050035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.040 [2024-07-15 08:04:02.050067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.040 [2024-07-15 08:04:02.050089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.040 [2024-07-15 08:04:02.054189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.040 [2024-07-15 08:04:02.063352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.063805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.040 [2024-07-15 08:04:02.063845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.040 [2024-07-15 08:04:02.063871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.040 [2024-07-15 08:04:02.064166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.040 [2024-07-15 08:04:02.064451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.040 [2024-07-15 08:04:02.064483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.040 [2024-07-15 08:04:02.064505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.040 [2024-07-15 08:04:02.068708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.040 [2024-07-15 08:04:02.077865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.078375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.040 [2024-07-15 08:04:02.078416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.040 [2024-07-15 08:04:02.078443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.040 [2024-07-15 08:04:02.078727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.040 [2024-07-15 08:04:02.079027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.040 [2024-07-15 08:04:02.079060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.040 [2024-07-15 08:04:02.079082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.040 [2024-07-15 08:04:02.083181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.040 [2024-07-15 08:04:02.092361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.092862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.040 [2024-07-15 08:04:02.092911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.040 [2024-07-15 08:04:02.092939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.040 [2024-07-15 08:04:02.093223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.040 [2024-07-15 08:04:02.093512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.040 [2024-07-15 08:04:02.093549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.040 [2024-07-15 08:04:02.093572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.040 [2024-07-15 08:04:02.097671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.040 [2024-07-15 08:04:02.106857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.107362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.040 [2024-07-15 08:04:02.107402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.040 [2024-07-15 08:04:02.107429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.040 [2024-07-15 08:04:02.107712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.040 [2024-07-15 08:04:02.108013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.040 [2024-07-15 08:04:02.108045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.040 [2024-07-15 08:04:02.108067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.040 [2024-07-15 08:04:02.112156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.040 [2024-07-15 08:04:02.121329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.040 [2024-07-15 08:04:02.121809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.121850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.121885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.122180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.122464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.122496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.122518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.126605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.041 [2024-07-15 08:04:02.135832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.136331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.136380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.136408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.136694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.136993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.137046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.137070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.141173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.041 [2024-07-15 08:04:02.150393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.150893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.150943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.150970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.151252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.151537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.151569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.151591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.155699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.041 [2024-07-15 08:04:02.164935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.165428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.165468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.165495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.165776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.166079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.166111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.166142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.170269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.041 [2024-07-15 08:04:02.179490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.179954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.179996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.180037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.180322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.180611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.180643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.180665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.184784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.041 [2024-07-15 08:04:02.194014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.194511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.194551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.194583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.194866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.195165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.195196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.195218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.199344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.041 [2024-07-15 08:04:02.208588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.209077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.209119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.209146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.209430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.209717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.209748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.209771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.213888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.041 [2024-07-15 08:04:02.223116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.223596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.223637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.223663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.223960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.224248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.224280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.224301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.228396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.041 [2024-07-15 08:04:02.237571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.238047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.238088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.238114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.238397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.238682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.238719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.238742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.242825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.041 [2024-07-15 08:04:02.252000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.252457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.252497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.252523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.252805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.253102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.253135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.253157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.041 [2024-07-15 08:04:02.257265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.041 [2024-07-15 08:04:02.266425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.041 [2024-07-15 08:04:02.266938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.041 [2024-07-15 08:04:02.266980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.041 [2024-07-15 08:04:02.267006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.041 [2024-07-15 08:04:02.267289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.041 [2024-07-15 08:04:02.267574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.041 [2024-07-15 08:04:02.267606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.041 [2024-07-15 08:04:02.267628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.271723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.301 [2024-07-15 08:04:02.280886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.281353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.281394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.281420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.281703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.282001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.282033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.282055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.286164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.301 [2024-07-15 08:04:02.295349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.295834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.295875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.295912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.296194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.296482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.296513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.296535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.300628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.301 [2024-07-15 08:04:02.309804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.310316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.310357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.310384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.310668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.310967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.311000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.311022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.315114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.301 [2024-07-15 08:04:02.324344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.324836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.324885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.324915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.325202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.325488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.325519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.325542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.329622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.301 [2024-07-15 08:04:02.338800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.339281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.339323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.339355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.339637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.339936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.339968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.339990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.344081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.301 [2024-07-15 08:04:02.353246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.353718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.353759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.353785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.354078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.354366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.354397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.354418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.358509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.301 [2024-07-15 08:04:02.367671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.368117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.368158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.368185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.368467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.368752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.368782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.368805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.372893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.301 [2024-07-15 08:04:02.382288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.382739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.382779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.382805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.383103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.383403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.383439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.383462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.387543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.301 [2024-07-15 08:04:02.396723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.397219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.397261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.397287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.397570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.397856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.301 [2024-07-15 08:04:02.397896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.301 [2024-07-15 08:04:02.397919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.301 [2024-07-15 08:04:02.402012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.301 [2024-07-15 08:04:02.411177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.301 [2024-07-15 08:04:02.411657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.301 [2024-07-15 08:04:02.411697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.301 [2024-07-15 08:04:02.411723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.301 [2024-07-15 08:04:02.412018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.301 [2024-07-15 08:04:02.412302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.412333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.412356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.416445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.302 [2024-07-15 08:04:02.425634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.426116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.426156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.426182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.426465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.426751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.426782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.426804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.430902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.302 [2024-07-15 08:04:02.440132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.440623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.440663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.440689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.440984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.441273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.441305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.441327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.445443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.302 [2024-07-15 08:04:02.454675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.455170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.455211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.455237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.455520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.455806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.455837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.455859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.459991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.302 [2024-07-15 08:04:02.469215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.469675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.469716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.469743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.470040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.470329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.470360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.470382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.474492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.302 [2024-07-15 08:04:02.483669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.484147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.484188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.484219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.484503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.484791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.484823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.484845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.488939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.302 [2024-07-15 08:04:02.498090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.498567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.498607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.498633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.498925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.499210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.499242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.499264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.503339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.302 [2024-07-15 08:04:02.512497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.512976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.513017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.513045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.513328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.513614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.513645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.513667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.302 [2024-07-15 08:04:02.517761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.302 [2024-07-15 08:04:02.526950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.302 [2024-07-15 08:04:02.527409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.302 [2024-07-15 08:04:02.527450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.302 [2024-07-15 08:04:02.527476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.302 [2024-07-15 08:04:02.527757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.302 [2024-07-15 08:04:02.528059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.302 [2024-07-15 08:04:02.528097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.302 [2024-07-15 08:04:02.528120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.561 [2024-07-15 08:04:02.532215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.561 [2024-07-15 08:04:02.541398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.561 [2024-07-15 08:04:02.541884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.561 [2024-07-15 08:04:02.541926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.561 [2024-07-15 08:04:02.541953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.561 [2024-07-15 08:04:02.542236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.561 [2024-07-15 08:04:02.542520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.561 [2024-07-15 08:04:02.542551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.561 [2024-07-15 08:04:02.542573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.561 [2024-07-15 08:04:02.546665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.561 [2024-07-15 08:04:02.555826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:11.561 [2024-07-15 08:04:02.556327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:11.561 [2024-07-15 08:04:02.556368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:11.561 [2024-07-15 08:04:02.556394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:11.561 [2024-07-15 08:04:02.556676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:11.561 [2024-07-15 08:04:02.556978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:11.561 [2024-07-15 08:04:02.557011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:11.561 [2024-07-15 08:04:02.557032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:11.561 [2024-07-15 08:04:02.561115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log condensed: 34 further iterations of the same nine-message reset cycle follow at roughly 14.5 ms intervals, starting 08:04:02.570261 and 08:04:02.584769 through 08:04:03.047347, identical except for timestamps; every one ends with "Resetting controller failed."]
00:37:12.084 [2024-07-15 08:04:03.061775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:12.084 [2024-07-15 08:04:03.062328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.084 [2024-07-15 08:04:03.062393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.084 [2024-07-15 08:04:03.062421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:12.084 [2024-07-15 08:04:03.062703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.084 [2024-07-15 08:04:03.063004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:12.084 [2024-07-15 08:04:03.063037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:12.084 [2024-07-15 08:04:03.063059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:12.084 [2024-07-15 08:04:03.067152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
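Every failed cycle above bottoms out in the same syscall result: connect() to 10.0.0.2:4420 returns errno 111, which is ECONNREFUSED on Linux, and that is exactly what posix_sock_create reports when no listener owns the target port; the nvmf_tgt process that served it is being killed and restarted, as the harness output just below shows. A quick way to verify this state by hand (a hypothetical probe assuming the log's addresses and bash's /dev/tcp, not part of bdevperf.sh):

  # Illustrative one-shot probe of the address/port the initiator keeps retrying.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener is up on 10.0.0.2:4420 - controller resets can start succeeding"
  else
      echo "no listener - connect() is refused, matching errno = 111 (ECONNREFUSED)"
  fi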
00:37:12.084 [2024-07-15 08:04:03.076281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:12.084 [2024-07-15 08:04:03.076831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.084 [2024-07-15 08:04:03.076874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.084 [2024-07-15 08:04:03.076911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:12.084 [2024-07-15 08:04:03.077205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.084 [2024-07-15 08:04:03.077491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:12.084 [2024-07-15 08:04:03.077524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:12.084 [2024-07-15 08:04:03.077546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:12.084 [2024-07-15 08:04:03.081616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log condensed: two further identical reset cycles follow at 08:04:03.090844 and 08:04:03.105307, each ending "Resetting controller failed."; the shell trace interleaved with them is regrouped below]
00:37:12.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1248130 Killed "${NVMF_APP[@]}" "$@"
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1249341
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1249341
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1249341 ']'
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:12.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:12.084 08:04:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
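The xtrace above is the recovery half of the test: tgt_init in bdevperf.sh kills the old target (the "Killed" message for PID 1248130), relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, and waitforlisten then polls the new process (PID 1249341) until its RPC socket answers. In spirit the sequence is roughly the following; this is a sketch under assumptions, not the harness's exact code, with an rpc.py polling loop standing in for SPDK's waitforlisten helper:

  # Illustrative restart sequence; nvmf/common.sh implements the real thing.
  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # in this sketch, the PID of the sudo wrapper rather than nvmf_tgt itself
  # Poll the app's RPC socket until it accepts requests (what waitforlisten does).
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"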
00:37:12.084 [2024-07-15 08:04:03.119868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.084 [2024-07-15 08:04:03.120480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.084 [2024-07-15 08:04:03.120529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.084 [2024-07-15 08:04:03.120559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.085 [2024-07-15 08:04:03.120852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.085 [2024-07-15 08:04:03.121169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.085 [2024-07-15 08:04:03.121203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.085 [2024-07-15 08:04:03.121228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.085 [2024-07-15 08:04:03.125463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.085 [2024-07-15 08:04:03.134625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.085 [2024-07-15 08:04:03.135225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.085 [2024-07-15 08:04:03.135282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.085 [2024-07-15 08:04:03.135311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.085 [2024-07-15 08:04:03.135615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.085 [2024-07-15 08:04:03.135931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.085 [2024-07-15 08:04:03.135965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.085 [2024-07-15 08:04:03.135994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.085 [2024-07-15 08:04:03.140233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.085 [2024-07-15 08:04:03.149431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:12.085 [2024-07-15 08:04:03.149971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.085 [2024-07-15 08:04:03.150014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.085 [2024-07-15 08:04:03.150041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:37:12.085 [2024-07-15 08:04:03.150330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.085 [2024-07-15 08:04:03.150620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:12.085 [2024-07-15 08:04:03.150652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:12.085 [2024-07-15 08:04:03.150675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:12.085 [2024-07-15 08:04:03.154895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... this nine-entry disconnect/reconnect-failure cycle repeats once more (08:04:03.164 through 08:04:03.169) before the target application starts ...]
00:37:12.085 [2024-07-15 08:04:03.178584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:12.085 [2024-07-15 08:04:03.178721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... six more identical failure cycles (08:04:03.178 through 08:04:03.257), all against tqpair=0x6150001f2500 at 10.0.0.2:4420 ...]
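errno 111 in the connect() failures above is Linux ECONNREFUSED: the initiator's TCP connect to 10.0.0.2:4420 (the NVMe/TCP well-known port) is refused because nothing listens there while the target restarts. A minimal standalone C sketch, not SPDK code, that reproduces the same errno on a host where the address is reachable but the port has no listener (address and port are taken from the log):

```c
/* econnrefused_demo.c: standalone sketch (not SPDK code). Shows that a TCP
 * connect() to an address where nothing listens fails with errno 111,
 * ECONNREFUSED on Linux, the same error posix_sock_create reports above.
 * 10.0.0.2:4420 comes from the log; the refusal is only observed when the
 * address is reachable but has no listener on the port. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}
```

After the refused connect the qpair's socket is gone, which is consistent with the immediately following "Failed to flush tqpair ... (9): Bad file descriptor" entries in each cycle.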
[... one more failure cycle (08:04:03.266 through 08:04:03.272) ...]
00:37:12.086 EAL: No free 2048 kB hugepages reported on node 1
[... five more failure cycles (08:04:03.281 through 08:04:03.344); the startup notice below is interleaved before the last cycle's closing entry ...]
00:37:12.345 [2024-07-15 08:04:03.341721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:37:12.345 [2024-07-15 08:04:03.344759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
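The EAL parameter -c 0xE shown earlier is a hexadecimal coremask: 0xE is binary 1110, selecting cores 1, 2 and 3, which is why spdk_app_start reports "Total cores available: 3". A small sketch that decodes such a mask (the mask value is taken from the log; no DPDK API is involved):

```c
/* coremask_demo.c: decode the EAL coremask from the log. 0xE is binary
 * 1110, i.e. cores 1, 2 and 3; this matches "Total cores available: 3"
 * and the three "Reactor started on core N" notices later in this log.
 * Plain C, no DPDK API involved. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;  /* from "[ DPDK EAL parameters: nvmf -c 0xE ... ]" */
    int count = 0;

    printf("coremask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf(" %d", core);
            count++;
        }
    }
    printf("\ntotal cores available: %d\n", count);  /* prints cores 1 2 3, total 3 */
    return 0;
}
```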
[... sixteen further identical failure cycles (08:04:03.353 through 08:04:03.578), each resetting the controller, failing connect() to 10.0.0.2:4420 with errno 111, and ending in "Resetting controller failed." ...]
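The repeating nine-entry pattern is the bdev_nvme reset path re-arming itself: each pass disconnects the controller, attempts a fresh TCP connect, and on refusal marks the controller failed and logs "Resetting controller failed.", with the next attempt roughly 15 ms later. A toy model of that control flow, purely illustrative and not SPDK's implementation (qpair_connect is a hypothetical stand-in for the transport connect):

```c
/* reset_loop_model.c: toy model of the repeating cycle above, purely
 * illustrative and not SPDK's implementation. Each pass "disconnects" the
 * controller, retries the connect, and on refusal declares the reset
 * failed, then tries again about 15 ms later, matching the log cadence. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool qpair_connect(void)
{
    /* Hypothetical stand-in: the real path calls connect() on a TCP
     * socket; while no target listens on 10.0.0.2:4420 it always fails
     * with ECONNREFUSED. */
    return false;
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);

        if (!qpair_connect()) {
            printf("controller reinitialization failed\n");
            printf("Resetting controller failed.\n");
        }

        usleep(15000);  /* next reset attempt ~15 ms later, as in the log */
    }
    return 0;
}
```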
[... two more failure cycles (08:04:03.587 through 08:04:03.607) ...]
00:37:12.605 [2024-07-15 08:04:03.608070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:12.605 [2024-07-15 08:04:03.608116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:12.605 [2024-07-15 08:04:03.608150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:12.605 [2024-07-15 08:04:03.608171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:12.605 [2024-07-15 08:04:03.608199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:12.605 [2024-07-15 08:04:03.608332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:37:12.605 [2024-07-15 08:04:03.608378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:12.605 [2024-07-15 08:04:03.608388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[... two more failure cycles (08:04:03.616 through 08:04:03.637) ...]
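"Reactor started on core N" appears once per core selected by the 0xE coremask decoded above. The shape behind it is one polling thread pinned to each selected core; the sketch below shows that shape with plain pthreads (illustrative only; SPDK's reactor framework does far more, and the affinity call can fail on machines that lack cores 1 through 3):

```c
/* reactors_demo.c: shape of "Reactor started on core N": one polling
 * thread pinned per core in the mask (0xE -> cores 1, 2, 3). Illustrative
 * only; not SPDK's reactor framework. Build with: cc -pthread reactors_demo.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_run(void *arg)
{
    int core = (int)(long)arg;

    /* Pin this thread to its core, as a reactor would be. The call
     * returns an error (ignored here) if the core does not exist. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("Reactor started on core %d\n", core);
    /* A real reactor would now poll its registered pollers in a loop. */
    return NULL;
}

int main(void)
{
    unsigned long long mask = 0xE;
    pthread_t threads[64];
    int n = 0;

    for (int core = 0; core < 64; core++)
        if (mask & (1ULL << core))
            pthread_create(&threads[n++], NULL, reactor_run,
                           (void *)(long)core);

    for (int i = 0; i < n; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```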
[... the failure cycle continues unchanged for sixteen further iterations (08:04:03.646 through 08:04:03.871), still running as this excerpt ends ...]
00:37:12.867 [2024-07-15 08:04:03.880444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.880940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.880981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.881008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.881303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.881595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.881627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.881650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.885874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.867 [2024-07-15 08:04:03.895055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.895559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.895600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.895628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.895930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.896230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.896263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.896286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.900506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.867 [2024-07-15 08:04:03.909602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.910069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.910111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.910137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.910424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.910711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.910744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.910766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.914921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.867 [2024-07-15 08:04:03.924215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.924705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.924746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.924773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.925073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.925361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.925393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.925416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.929554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.867 [2024-07-15 08:04:03.938790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.939295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.939336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.939362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.939647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.939950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.939983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.940006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.944131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.867 [2024-07-15 08:04:03.953412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.953853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.953903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.953933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.954225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.954518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.954551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.954574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.958749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.867 [2024-07-15 08:04:03.968161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.968694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.968738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.968765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.969071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.969365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.969398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.969421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.973604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.867 [2024-07-15 08:04:03.982853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.983370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.867 [2024-07-15 08:04:03.983420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.867 [2024-07-15 08:04:03.983447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.867 [2024-07-15 08:04:03.983734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.867 [2024-07-15 08:04:03.984038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.867 [2024-07-15 08:04:03.984072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.867 [2024-07-15 08:04:03.984095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.867 [2024-07-15 08:04:03.988364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.867 [2024-07-15 08:04:03.997574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.867 [2024-07-15 08:04:03.998038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:03.998080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:03.998113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:03.998403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:03.998695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:03.998728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:03.998751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.002957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.868 [2024-07-15 08:04:04.012108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.868 [2024-07-15 08:04:04.012603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:04.012651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:04.012678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:04.012987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:04.013289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:04.013321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:04.013345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.017531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.868 [2024-07-15 08:04:04.026726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.868 [2024-07-15 08:04:04.027178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:04.027220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:04.027246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:04.027533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:04.027821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:04.027853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:04.027889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.032063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.868 [2024-07-15 08:04:04.041352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.868 [2024-07-15 08:04:04.041808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:04.041849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:04.041915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:04.042206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:04.042499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:04.042531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:04.042554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.046695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.868 [2024-07-15 08:04:04.055838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.868 [2024-07-15 08:04:04.056342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:04.056384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:04.056410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:04.056693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:04.056992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:04.057025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:04.057047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.061198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.868 [2024-07-15 08:04:04.070497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.868 [2024-07-15 08:04:04.070969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:04.071012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:04.071039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:04.071324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:04.071615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:04.071647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:04.071670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.075801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.868 [2024-07-15 08:04:04.085078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.868 [2024-07-15 08:04:04.085572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.868 [2024-07-15 08:04:04.085612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.868 [2024-07-15 08:04:04.085638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.868 [2024-07-15 08:04:04.085935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.868 [2024-07-15 08:04:04.086231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.868 [2024-07-15 08:04:04.086263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.868 [2024-07-15 08:04:04.086285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.868 [2024-07-15 08:04:04.090385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.126 [2024-07-15 08:04:04.099240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.126 [2024-07-15 08:04:04.099696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.126 [2024-07-15 08:04:04.099733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.126 [2024-07-15 08:04:04.099758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.126 [2024-07-15 08:04:04.100034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.126 [2024-07-15 08:04:04.100297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.126 [2024-07-15 08:04:04.100326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.126 [2024-07-15 08:04:04.100351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.126 [2024-07-15 08:04:04.104178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.126 [2024-07-15 08:04:04.113281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.126 [2024-07-15 08:04:04.113789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.126 [2024-07-15 08:04:04.113825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.126 [2024-07-15 08:04:04.113848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.126 [2024-07-15 08:04:04.114126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.126 [2024-07-15 08:04:04.114394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.126 [2024-07-15 08:04:04.114423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.126 [2024-07-15 08:04:04.114443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.126 [2024-07-15 08:04:04.118238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
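The (( i == 0 )) / return 0 trace above is the tail of the wait loop that blocks until the freshly restarted nvmf_tgt answers RPCs, after which timing_exit closes the start_nvmf_tgt timing region. A simplified sketch of that polling pattern (a hypothetical form, not the exact helper from autotest_common.sh):

# Poll the target's RPC socket until it responds; bail out after 10 tries.
i=0
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
  i=$(( i + 1 ))
  (( i == 10 )) && { echo "nvmf_tgt never came up" >&2; exit 1; }
  sleep 0.5
done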
00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.126 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.126 [2024-07-15 08:04:04.127516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.126 [2024-07-15 08:04:04.127526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.126 [2024-07-15 08:04:04.128021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.126 [2024-07-15 08:04:04.128059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.126 [2024-07-15 08:04:04.128083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.126 [2024-07-15 08:04:04.128379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.126 [2024-07-15 08:04:04.128625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.126 [2024-07-15 08:04:04.128663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.126 [2024-07-15 08:04:04.128682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.126 [2024-07-15 08:04:04.132418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.126 [2024-07-15 08:04:04.141601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.142054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.142092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.142115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.142394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.142630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.142656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.142675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 [2024-07-15 08:04:04.146434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
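Interleaved with the reset noise, the test registers the TCP transport with rpc_cmd nvmf_create_transport -t tcp -o -u 8192, which the target acknowledges with "*** TCP Transport Init ***". rpc_cmd in these scripts forwards to scripts/rpc.py against the running target, so the equivalent direct call is (a sketch):

# Register the TCP transport on the running nvmf target:
#   -t tcp   transport type
#   -u 8192  I/O unit size in bytes
#   -o       TCP-specific C2H-success option (see rpc.py help for exact semantics)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192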
00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.127 [2024-07-15 08:04:04.155958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.127 [2024-07-15 08:04:04.156398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.156445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.156470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.156741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.157026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.157057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.157078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 [2024-07-15 08:04:04.160840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.127 [2024-07-15 08:04:04.170284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.170938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.170989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.171018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.171315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.171576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.171606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.171629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 [2024-07-15 08:04:04.175458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.127 [2024-07-15 08:04:04.184466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.185004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.185047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.185074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.185366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.185618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.185647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.185668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 [2024-07-15 08:04:04.189475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.127 [2024-07-15 08:04:04.198687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.199188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.199236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.199261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.199550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.199799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.199827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.199846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 [2024-07-15 08:04:04.203688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.127 [2024-07-15 08:04:04.212703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.213199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.213246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.213271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.213557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.213803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.213830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.213855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 [2024-07-15 08:04:04.217577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.127 [2024-07-15 08:04:04.226770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.227269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.127 [2024-07-15 08:04:04.227316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.127 [2024-07-15 08:04:04.227340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.127 [2024-07-15 08:04:04.227624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.127 [2024-07-15 08:04:04.227896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.127 [2024-07-15 08:04:04.227925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.127 [2024-07-15 08:04:04.227945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.127 Malloc0 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.127 [2024-07-15 08:04:04.231716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.127 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.127 [2024-07-15 08:04:04.241084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.127 [2024-07-15 08:04:04.241626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.128 [2024-07-15 08:04:04.241674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.128 [2024-07-15 08:04:04.241698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:13.128 [2024-07-15 08:04:04.241976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.128 [2024-07-15 08:04:04.242266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:13.128 [2024-07-15 08:04:04.242294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:13.128 [2024-07-15 08:04:04.242314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.128 [2024-07-15 08:04:04.246076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:13.128 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.128 08:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.128 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.128 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.128 [2024-07-15 08:04:04.250127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.128 08:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.128 08:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1248559 00:37:13.128 [2024-07-15 08:04:04.255256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.128 [2024-07-15 08:04:04.298288] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
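With the listener in place, the long run of failures finally ends: the last reset attempt completes ("Resetting controller successful") and wait lets the bdevperf job run to completion. For reference, the target-side sequence the test just issued, collected in one place (a sketch of the same rpc.py calls that rpc_cmd wraps):

# 64 MiB malloc bdev with 512-byte blocks, exported over NVMe/TCP:
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# -a: allow any host, -s: serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# The listen address the initiator had been retrying against all along:
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420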
00:37:23.098 00:37:23.098 Latency(us) 00:37:23.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:23.098 Verification LBA range: start 0x0 length 0x4000 00:37:23.098 Nvme1n1 : 15.02 4198.73 16.40 8899.10 0.00 9742.11 3325.35 36894.34 00:37:23.098 =================================================================================================================== 00:37:23.098 Total : 4198.73 16.40 8899.10 0.00 9742.11 3325.35 36894.34 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:23.098 rmmod nvme_tcp 00:37:23.098 rmmod nvme_fabrics 00:37:23.098 rmmod nvme_keyring 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1249341 ']' 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1249341 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1249341 ']' 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1249341 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1249341 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1249341' 00:37:23.098 killing process with pid 1249341 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1249341 00:37:23.098 08:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1249341 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
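The bdevperf summary above is internally consistent for 4096-byte IOs, and the large Fail/s figure matches the reset storm recorded earlier in the run. A quick cross-check of the IOPS and MiB/s columns:

# Cross-check of the Nvme1n1 summary line (IO size: 4096 bytes):
#   4198.73 IOPS * 4096 B  = 17,197,998 B/s
#   17,197,998 / 1,048,576 = 16.40 MiB/s   -> matches the MiB/s column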
00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:24.476 08:04:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.383 08:04:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:26.383 00:37:26.383 real 0m26.676s 00:37:26.383 user 1m11.763s 00:37:26.383 sys 0m5.299s 00:37:26.383 08:04:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:26.383 08:04:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:26.383 ************************************ 00:37:26.383 END TEST nvmf_bdevperf 00:37:26.383 ************************************ 00:37:26.383 08:04:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:26.383 08:04:17 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:26.383 08:04:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:26.383 08:04:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:26.383 08:04:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.383 ************************************ 00:37:26.383 START TEST nvmf_target_disconnect 00:37:26.383 ************************************ 00:37:26.383 08:04:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:26.642 * Looking for test storage... 
00:37:26.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.642 08:04:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:26.643 08:04:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:28.547 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:28.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:28.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.548 08:04:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:28.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:28.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:28.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:28.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:37:28.548 00:37:28.548 --- 10.0.0.2 ping statistics --- 00:37:28.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.548 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:28.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:28.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:37:28.548 00:37:28.548 --- 10.0.0.1 ping statistics --- 00:37:28.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.548 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.548 08:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.807 ************************************ 00:37:28.807 START TEST nvmf_target_disconnect_tc1 00:37:28.807 ************************************ 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:37:28.807 
08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:28.807 08:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.807 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.807 [2024-07-15 08:04:19.972160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.807 [2024-07-15 08:04:19.972280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:28.807 [2024-07-15 08:04:19.972370] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:28.807 [2024-07-15 08:04:19.972406] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:28.807 [2024-07-15 08:04:19.972432] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:28.807 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:28.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:28.807 Initializing NVMe Controllers 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.807 00:37:28.807 real 0m0.215s 00:37:28.807 user 0m0.085s 00:37:28.807 sys 
0m0.130s 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:28.807 ************************************ 00:37:28.807 END TEST nvmf_target_disconnect_tc1 00:37:28.807 ************************************ 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.807 08:04:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:29.064 ************************************ 00:37:29.064 START TEST nvmf_target_disconnect_tc2 00:37:29.064 ************************************ 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1252713 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1252713 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1252713 ']' 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
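To recap the flow up to this point: the harness moved one E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace and left its sibling (cvl_0_1, 10.0.0.1) in the default namespace, then tc1 ran build/examples/reconnect against 10.0.0.2:4420 while nothing was listening yet, so connect() failed with errno 111 (ECONNREFUSED) and spdk_nvme_probe() returned an error. That failure is the pass condition: the NOT wrapper from autotest_common.sh inverts the exit status. A minimal sketch of such an expected-failure wrapper, assuming plain bash:

# Minimal sketch of an expected-failure wrapper in the spirit of NOT;
# like the real helper, exit codes > 128 (death by signal) still count
# as genuine failures rather than the expected kind.
expect_failure() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: real failure
    (( es != 0 ))                # succeed only if the command failed
}
expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 \
    -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'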
00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:29.064 08:04:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.064 [2024-07-15 08:04:20.145635] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:29.064 [2024-07-15 08:04:20.145775] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.064 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.064 [2024-07-15 08:04:20.277937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:29.350 [2024-07-15 08:04:20.495463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.350 [2024-07-15 08:04:20.495540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:29.350 [2024-07-15 08:04:20.495571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.350 [2024-07-15 08:04:20.495587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.350 [2024-07-15 08:04:20.495604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.350 [2024-07-15 08:04:20.495712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:29.350 [2024-07-15 08:04:20.495834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:29.350 [2024-07-15 08:04:20.495927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:29.350 [2024-07-15 08:04:20.495949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.920 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.179 Malloc0 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:30.179 08:04:21 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.179 [2024-07-15 08:04:21.156016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.179 [2024-07-15 08:04:21.185281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1252866 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:30.179 08:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:30.179 EAL: No free 2048 kB 
hugepages reported on node 1 00:37:32.088 08:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1252713 00:37:32.088 08:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 
starting I/O failed 00:37:32.088 [2024-07-15 08:04:23.223698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 [2024-07-15 08:04:23.224367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O 
failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 [2024-07-15 08:04:23.224949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 
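For context on the failures above: before this reconnect run, tc2 (host/target_disconnect.sh lines 17-26 in the preceding entries) launched nvmf_tgt inside the target namespace on cores 4-7 (-m 0xF0, hence the reactors on cores 4-7), waited for it to listen on /var/tmp/spdk.sock, and configured a one-namespace TCP subsystem over RPC; the storm of aborted I/O begins the moment the harness kill -9s that pid (1252713). A condensed sketch of the same setup issued by hand with scripts/rpc.py instead of the harness's rpc_cmd wrapper, with a crude socket poll standing in for waitforlisten:

# Condensed, by-hand version of the tc2 setup shown in the log above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# ... run the reconnect workload, then inject the disconnect:
kill -9 "$nvmfpid"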
00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Write completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 Read completed with error (sct=0, sc=8) 00:37:32.088 starting I/O failed 00:37:32.088 [2024-07-15 08:04:23.225545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:32.088 [2024-07-15 08:04:23.225832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.225895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.226062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.226098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.226385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.226419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.226636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.226685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.226866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.226908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.227076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.227111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 
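A note on decoding the storm above: each "completed with error (sct=0, sc=8)" line is an outstanding command being completed in software once the connection died. Status code type 0 is the generic set, where code 0x08 is "Command Aborted due to SQ Deletion", which is consistent with SPDK draining the qpair after the "CQ transport error -6 (No such device or address)" (-ENXIO) events reported for qpair ids 4, 3, 1 and 2. With the log saved to a file, the aborts are easy to tally; target_disconnect.log is an assumed filename, not something the harness writes:

# Count aborted reads vs writes in a saved copy of this log
# (target_disconnect.log is illustrative only).
grep -oE '(Read|Write) completed with error \(sct=0, sc=8\)' target_disconnect.log \
    | sort | uniq -c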
00:37:32.088 [2024-07-15 08:04:23.227275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.227309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.227492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.227527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.227818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.227875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.228064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.228099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.228235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.228270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.228489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.228522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.228723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.228762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.228930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.228965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.229141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.229189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.229397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.229434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 
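All of the retries in this stretch fail the same way: errno 111 is ECONNREFUSED, and with the target process SIGKILLed nothing is bound to 10.0.0.2:4420 anymore, so every TCP connect attempted for a fresh admin qpair is refused (the varying tqpair pointers are simply new allocations per attempt). The same condition can be checked from the initiator side with a bash-only probe, assuming a bash built with /dev/tcp support:

# Initiator-side probe of the target port; assumes bash /dev/tcp support
# (an equivalent nc -z loop would do the same job).
for attempt in 1 2 3 4 5; do
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "port 4420 reachable"      # fd 3 closes with the subshell
        break
    fi
    echo "attempt $attempt: connection refused"
    sleep 1
done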
00:37:32.088 [2024-07-15 08:04:23.229673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.229708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.229914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.229949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.230117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.230165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.230317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.088 [2024-07-15 08:04:23.230351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.088 qpair failed and we were unable to recover it. 00:37:32.088 [2024-07-15 08:04:23.230586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.230624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.230900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.230934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.231066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.231100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.231327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.231360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.231520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.231569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.231772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.231805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 
00:37:32.089 [2024-07-15 08:04:23.231977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.232010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.232171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.232204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.232372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.232406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.232614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.232651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.232808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.232841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.233033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.233066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.233203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.233236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.233432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.233484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.233651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.233688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.233851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.233893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 
00:37:32.089 [2024-07-15 08:04:23.234036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.234070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.234255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.234326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.234686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.234749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.235014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.235049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.235220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.235254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.235418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.235451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.235614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.235665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.235883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.235919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.236075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.236109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 00:37:32.089 [2024-07-15 08:04:23.236265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.089 [2024-07-15 08:04:23.236299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.089 qpair failed and we were unable to recover it. 
00:37:32.089 [2024-07-15 08:04:23.236472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.089 [2024-07-15 08:04:23.236505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.089 qpair failed and we were unable to recover it.
00:37:32.089 [2024-07-15 08:04:23.237164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.089 [2024-07-15 08:04:23.237198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:32.089 qpair failed and we were unable to recover it.
00:37:32.089 [2024-07-15 08:04:23.238314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.089 [2024-07-15 08:04:23.238362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:32.089 qpair failed and we were unable to recover it.
00:37:32.090 [2024-07-15 08:04:23.242098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.090 [2024-07-15 08:04:23.242148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:32.090 qpair failed and we were unable to recover it.
00:37:32.093 [identical connect() failed / sock connection error / qpair failed triplets repeated through 2024-07-15 08:04:23.283573, cycling over tqpair=0x6150001ffe80, 0x6150001f2a00, 0x615000210000, and 0x61500021ff00, all against addr=10.0.0.2, port=4420 with errno = 111]
00:37:32.093 [2024-07-15 08:04:23.283764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.283800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.283976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.284026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.284212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.284249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.284546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.284605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.284940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.284979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.285191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.285225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.285355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.285389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.285672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.285707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.285906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.285941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.286119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.286174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 
00:37:32.093 [2024-07-15 08:04:23.286353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.286409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.286636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.286675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.286866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.286930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.287080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.287120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.287306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.287342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.287514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.287550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.287720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.287772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.287964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.288000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.288180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.288230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.288430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.288465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 
00:37:32.093 [2024-07-15 08:04:23.288640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.288675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.288870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.288913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.289077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.289111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.289270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.289311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.289497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.093 [2024-07-15 08:04:23.289533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.093 qpair failed and we were unable to recover it. 00:37:32.093 [2024-07-15 08:04:23.289692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.289726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.289898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.289933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.290105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.290143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.290364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.290424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.290619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.290653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.290811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.290845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.291041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.291075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.291244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.291278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.291556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.291614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.291820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.291858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.292052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.292086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.292230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.292263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.292400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.292434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.292602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.292636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.292861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.292927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.293137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.293186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.293392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.293429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.293631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.293666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.293873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.293923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.294084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.294119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.294284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.294318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.294484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.294517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.294680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.294714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.294882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.294918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.295111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.295161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.295402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.295439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.295678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.295713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.295851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.295896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.296048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.296083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.296249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.296285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.296568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.296626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.296820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.296855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.297059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.297095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.297335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.297371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.297562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.297597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.297790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.297829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.298032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.298067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.298228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.298264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.298477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.298516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.298686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.298721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.298895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.298931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.299117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.299156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.299359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.299397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.299613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.299649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.299871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.299914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.300078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.300113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.300279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.300314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.300476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.300512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.300718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.300774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.300997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.301033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.301240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.301278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.301500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.301562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.301749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.301784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.301999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.302035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.302174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.302220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.302383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.302418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.302546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.302596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.302786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.302822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.303025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.303061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.303249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.303283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.303498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.303559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.303751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.303784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.303970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.304005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.304184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.304221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.304430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.304463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 
00:37:32.094 [2024-07-15 08:04:23.304663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.304700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.304845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.304892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.305076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.305112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.305268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.305303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.305442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.305475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.305639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.305673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.305804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.094 [2024-07-15 08:04:23.305858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.094 qpair failed and we were unable to recover it. 00:37:32.094 [2024-07-15 08:04:23.306078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.306113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.306281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.306316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.306486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.306538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 
00:37:32.095 [2024-07-15 08:04:23.306749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.306782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.306942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.306976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.307140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.307172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.307438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.307477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.307622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.307655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.307835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.307870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.308118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.308153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.308314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.308348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.308533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.308567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.308730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.308763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 
00:37:32.095 [2024-07-15 08:04:23.308905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.308937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.309077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.309110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.309275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.309306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.309476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.309508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.309670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.309706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.309891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.309929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.310091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.310125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.310294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.310347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.310503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.310540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.095 [2024-07-15 08:04:23.310720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.310754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 
00:37:32.095 [2024-07-15 08:04:23.310936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.095 [2024-07-15 08:04:23.310972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.095 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.311162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.311196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.311397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.311430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.311615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.311647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.311809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.311840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.312012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.312044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.312230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.312265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.312539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.312599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.312789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.312821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 00:37:32.368 [2024-07-15 08:04:23.312990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.368 [2024-07-15 08:04:23.313026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.368 qpair failed and we were unable to recover it. 
00:37:32.368 [2024-07-15 08:04:23.313203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.368 [2024-07-15 08:04:23.313241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.368 qpair failed and we were unable to recover it.
00:37:32.371 (the three messages above repeat, with fresh timestamps, for every reconnect attempt from [2024-07-15 08:04:23.313401] through [2024-07-15 08:04:23.358867]: each connect() to 10.0.0.2 port 4420 fails with errno = 111, and each time the qpair cannot be recovered)
00:37:32.371 [2024-07-15 08:04:23.359061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.359098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.359273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.359306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.359455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.359489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.359650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.359689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.359854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.359899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.360045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.360077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.360222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.360254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.360417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.360450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.360626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.360661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.360863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.360911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-15 08:04:23.361099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.361133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.361316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.361352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.361539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.361573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.361770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.361807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.362041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.362075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.362261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.362300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.362488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.362522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.362683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.362716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.362900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.362937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.363152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.363184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-15 08:04:23.363385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.363419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.363554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.363587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.363764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.363800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.364020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.364057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.364247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.364285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.364472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.364505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.364670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.364721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.364914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.364955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.365144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.365178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.365334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.365371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-15 08:04:23.365557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.365594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.366170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.366215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.366432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.366465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.366651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.366684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.366856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.366900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.367071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.367105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.367303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.367351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.367539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.367571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.367742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.367774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 00:37:32.371 [2024-07-15 08:04:23.367939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.371 [2024-07-15 08:04:23.367992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.371 qpair failed and we were unable to recover it. 
00:37:32.371 [2024-07-15 08:04:23.368213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.368246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.368377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.368410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.368549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.368582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.368725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.368757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.368953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.369002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.369152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.369207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.369352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.369385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.369597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.369633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.369814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.369851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.370085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.370118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.370282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.370318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.370508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.370542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.370707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.370741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.370934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.370972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.371155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.371191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.371361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.371393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.371586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.371620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.371798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.371832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.372046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.372096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.372310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.372347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.372553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.372588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.372755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.372790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.372961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.372997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.373197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.373231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.373438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.373490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.373694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.373729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.373895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.373941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.374200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.374250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.374422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.374469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.374657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.374696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.374910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.374954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.375101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.375134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.375301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.375334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.375527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.375576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.375728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.375767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.375970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.376007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.376174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.376208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.376391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.376424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.376625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.376661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.376852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.376901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.377096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.377135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.377332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.377370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.377640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.377697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.377893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.378098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.378133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.378306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.378340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.378472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.378525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.378741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.378778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.378985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.379020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.379197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.379234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.379437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.379473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.379658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.379694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.379873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.379939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.380126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.380177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.380483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.380553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.380758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.380791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.380934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.380968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.381187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.381238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.381407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.381444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.381651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.381702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.381869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.381910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.382079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.382114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.382278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.382311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.382472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.382506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.382669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.382703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.382896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.382941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.383124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.383181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.383379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.383416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.383632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.383670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.383856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.383899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 
00:37:32.372 [2024-07-15 08:04:23.384069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.384108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.384308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.384345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.384731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.384790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.385003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.385037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.385251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.385299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.385626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.385684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.385888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.385922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.372 [2024-07-15 08:04:23.386090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.372 [2024-07-15 08:04:23.386124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.372 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.386358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.386392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.386580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.386617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 
00:37:32.373 [2024-07-15 08:04:23.386775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.386811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.387029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.387062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.387261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.387295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.387502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.387539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.387739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.387775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.387961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.387996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.388130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.388164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.388348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.388381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.388569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.388607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 00:37:32.373 [2024-07-15 08:04:23.388793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.373 [2024-07-15 08:04:23.388831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.373 qpair failed and we were unable to recover it. 
00:37:32.373 [2024-07-15 08:04:23.389011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.373 [2024-07-15 08:04:23.389044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.373 qpair failed and we were unable to recover it.
00:37:32.373 [... the same three-line failure sequence repeats for every connection attempt from 2024-07-15 08:04:23.389231 through 08:04:23.437075: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error to addr=10.0.0.2, port=4420, and each qpair fails without recovery; tqpair handles observed across the run: 0x6150001ffe80, 0x615000210000, 0x6150001f2a00 ...]
00:37:32.376 [2024-07-15 08:04:23.437287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.437330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.437530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.437563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.437763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.437799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.437945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.437983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.438190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.438224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.438422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.438458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.438669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.438709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.438935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.438969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.439134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.439384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.439421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.439577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.439619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.439827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.439862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.440033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.440070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.440255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.440292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.440488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.440525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.440712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.440756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.440973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.441011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.441217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.441256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.441573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.441612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.441831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.441865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.442105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.442142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.442330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.442367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.442646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.442710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.442939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.442972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.443149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.443196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.443370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.443407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.443654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.443709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.443874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.443924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.444109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.444146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.444305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.444342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.444527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.444564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.444780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.444820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.445036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.445073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.445222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.445267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.445522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.445595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.445786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.445820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.446022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.446060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.446216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.446266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.446603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.446640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.446847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.446888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.447097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.447131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.447363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.447400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.447661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.447700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.447906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.447940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.448104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.448141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.448359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.448396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.448697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.448759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.448943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.448977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.449194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.449231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.449429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.449476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.449730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.449792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.449984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.450018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.450201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.450239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.450432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.450468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.450663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.450700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.450894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.450927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.451096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.451129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.451316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.451353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.451561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.451602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.451826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.451859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.452089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.452126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.452310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.452347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.452581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.452647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.452838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.452884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.453106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.453143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.453334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.453371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.453557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.453593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.453814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.453847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.454090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.454128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.454318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.454355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 
00:37:32.376 [2024-07-15 08:04:23.454648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.454709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.454897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.454943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.455158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.455202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.455372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.455409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.455603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.455641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.455868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.455918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.456133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.456175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.376 qpair failed and we were unable to recover it. 00:37:32.376 [2024-07-15 08:04:23.456357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.376 [2024-07-15 08:04:23.456394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.456570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.456606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.456793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.456826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-15 08:04:23.457062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.457101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.457274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.457311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.457481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.457518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.457683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.457715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.457883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.457917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.458074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.458107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.458386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.458449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.458607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.458640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.458809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.458842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.459022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.459056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-15 08:04:23.459209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.459253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.459436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.459475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.459673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.459710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.459898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.459936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.460088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.460124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.460339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.460372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.460597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.460635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.460822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.460854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.461035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.461068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.461224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.461265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-15 08:04:23.461451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.461488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.461726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.461762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.461992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.462026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.462195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.462228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.462424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.462462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.462677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.462710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.462873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.462915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.463119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.463152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.463356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.463393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.463595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.463632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
00:37:32.377 [2024-07-15 08:04:23.463844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.463894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.464111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.464144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.464360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.464397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.464580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.464625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.464806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.464854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.465051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.465085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.465283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.465320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.465502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.465558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.465837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.465888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 00:37:32.377 [2024-07-15 08:04:23.466114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.377 [2024-07-15 08:04:23.466149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.377 qpair failed and we were unable to recover it. 
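errno = 111 on Linux is ECONNREFUSED: the TCP SYN sent to 10.0.0.2:4420 (4420 is the default NVMe/TCP service port) is answered with an RST because nothing is accepting connections there, typically because the target's listener is not up yet or was torn down mid-test. The minimal sketch below reproduces the errno in isolation; it is illustrative only and not SPDK code — the address and port are copied from the log, everything else is an assumption.

/* sketch: a blocking TCP connect() to a port with no listener fails
 * with ECONNREFUSED (errno 111 on Linux). Illustrative only. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* no listener => the kernel answers the SYN with RST => ECONNREFUSED */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}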
[... the same failure triplet now repeats for tqpair=0x6150001ffe80 (same addr=10.0.0.2, port=4420, errno = 111) through 2024-07-15 08:04:23.479320, each attempt again ending with "qpair failed and we were unable to recover it." ...]
00:37:32.378 [2024-07-15 08:04:23.479510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.479547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.479745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.479783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.479961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.480004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.480212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.480261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.480454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.480487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.480691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.480735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.480910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.480949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.481094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.481131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.481304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.481341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.481557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.481594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-15 08:04:23.481830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.481868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.482112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.482152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.482339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.482372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.482515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.482549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.482781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.482820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.483013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.483050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.483187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.483217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.483417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.483455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.483691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.483728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.483908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.483948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-15 08:04:23.484110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.484144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.484331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.484367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.484576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.484610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.484803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.484836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.485024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.485057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.485230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.485285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.485508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.485572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.485751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.485784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.485986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.486020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.486187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.486255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-15 08:04:23.486538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.486575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.486784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.486818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.486991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.487025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.487213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.487254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.487597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.487635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.487848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.487890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.488051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.488085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.488356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.488405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.488661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.488717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.488925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.488960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-15 08:04:23.489176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.489214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.489579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.489649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.489852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.489892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.490043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.490076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.490274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.490324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.490572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.490628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.490797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.490832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.491018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.491052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.491213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.491251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.491465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.491503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 
00:37:32.378 [2024-07-15 08:04:23.491707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.491757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.491950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.491985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.492199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.492237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.492475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.492512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.492737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.492769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.492939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.492974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.493202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.378 [2024-07-15 08:04:23.493250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.378 qpair failed and we were unable to recover it. 00:37:32.378 [2024-07-15 08:04:23.493466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.493513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.493684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.493718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.493885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.493920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.494072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.494123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.494359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.494397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.494620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.494657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.494810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.494847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.495062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.495095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.495291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.495328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.495523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.495560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.495729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.495771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.495943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.495976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.496146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.496185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.496493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.496549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.496733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.496769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.496986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.497020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.497189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.497222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.497502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.497549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.497743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.497777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.497925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.497969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.498177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.498229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.498532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.498588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.498777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.498809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.498989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.499023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.499203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.499252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.499593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.499653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.499865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.499923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.500068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.500102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.500256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.500290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.500452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.500484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.500672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.500709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.500891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.500930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.501094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.501128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.501373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.501409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.501583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.501620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.501783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.501819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.501998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.502032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.502196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.502234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.502482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.502549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.502755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.502792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.502979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.503026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.503226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.503268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.503484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.503538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.503699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.503735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.503911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.503963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.504118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.504151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.504378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.504420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.504602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.504640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.504832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.504870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.505097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.505130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.505466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.505509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.505725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.505762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.505966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.506000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.506170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.506204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.506509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.506574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.506748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.506785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.506992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.507026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.507181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.507230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.507480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.507546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.507723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.507760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.507997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.508031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.508189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.508223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.508487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.508532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 
00:37:32.379 [2024-07-15 08:04:23.508708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.508744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.508941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.508975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.509122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.509167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.509479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.509518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.509720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.509764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.509940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.509973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.510106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.510157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.510310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.510346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.379 [2024-07-15 08:04:23.510501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.379 [2024-07-15 08:04:23.510539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.379 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.510749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.510787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 
00:37:32.380 [2024-07-15 08:04:23.510947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.510981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.511144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.511197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.511349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.511385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.511590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.511626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.511814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.511850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.512052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.512085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.512228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.512261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.512464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.512525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.512728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.512764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 00:37:32.380 [2024-07-15 08:04:23.512960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.380 [2024-07-15 08:04:23.512997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.380 qpair failed and we were unable to recover it. 
...
00:37:32.382 [2024-07-15 08:04:23.558464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.382 [2024-07-15 08:04:23.558496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.382 qpair failed and we were unable to recover it.
00:37:32.382 [2024-07-15 08:04:23.558676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.382 [2024-07-15 08:04:23.558738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.382 qpair failed and we were unable to recover it.
00:37:32.382 [2024-07-15 08:04:23.558947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.558985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.559143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.559180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.559365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.559397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.559562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.559594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.559827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.559872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.560047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.560083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.560300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.560332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.560520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.560562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.560731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.560768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.560915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.560977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 
00:37:32.382 [2024-07-15 08:04:23.561193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.561236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.561480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.561513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.561708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.561762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.561922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.561960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.562171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.382 [2024-07-15 08:04:23.562203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.382 qpair failed and we were unable to recover it. 00:37:32.382 [2024-07-15 08:04:23.562402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.562438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.562602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.562645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.562867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.562911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.563074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.563107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.563316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.563353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.563508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.563545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.563725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.563761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.563950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.563995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.564195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.564232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.564467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.564499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.564670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.564703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.564918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.564968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.565148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.565216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.565372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.565409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.565643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.565679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.565891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.565936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.566092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.566128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.566318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.566355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.566551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.566584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.566776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.566809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.567026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.567059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.567256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.567289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.567495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.567532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.567710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.567743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.567915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.567953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.568104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.568143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.568372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.568410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.568598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.568632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.568818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.568856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.569031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.569068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.569226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.569262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.569427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.569459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.569677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.569732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.569928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.569965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.570174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.570210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.570418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.570452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.570647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.570685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.570895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.570935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.571123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.571159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.571323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.571356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.571563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.571599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.571798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.571834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.572044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.572089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.572277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.572310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.572471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.572504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.572640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.572679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.572891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.572928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.573139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.573173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.573364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.573397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.573611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.573657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.573844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.573898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.574083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.574116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.574298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.574334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.574523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.574561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.574730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.574768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.574983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.575016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.575225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.575275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.575433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.575469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.575653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.575691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.575883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.575922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.576155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.576193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.576385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.576422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.576606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.576643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.576839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.576873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.577077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.577114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.577280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.577312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.577475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.577528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.577729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.577767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.578012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.578046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.578204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.578240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.578397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.578435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.578616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.578649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.578822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.578864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.579074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.579115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.579270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.579304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 
00:37:32.383 [2024-07-15 08:04:23.579490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.579524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.579683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.579720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.579929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.383 [2024-07-15 08:04:23.579966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.383 qpair failed and we were unable to recover it. 00:37:32.383 [2024-07-15 08:04:23.580169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.580207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-15 08:04:23.580392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.580436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-15 08:04:23.580612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.580649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-15 08:04:23.580847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.580895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-15 08:04:23.581056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.581094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-15 08:04:23.581241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.581274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.384 [2024-07-15 08:04:23.581411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.581446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 
00:37:32.384 [2024-07-15 08:04:23.581637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.384 [2024-07-15 08:04:23.581674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.384 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.581869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.581916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.582086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.582131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.582321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.582355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.582526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.582570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.582730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.582767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.582988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.583032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.583206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.583243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.583399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.583437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.583618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.583655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 
00:37:32.661 [2024-07-15 08:04:23.583814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.583849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.584015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.584050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.584241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.584298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.584455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.584493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.584715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.584749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.584960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.584998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.585138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.585176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.585365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.585399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.585554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.585587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.585772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.585809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 
00:37:32.661 [2024-07-15 08:04:23.585971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.586008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.586178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.586228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.586414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.586450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.586642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.586680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.586928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.586961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.587119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.587154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.587369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.587403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.587593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.587635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.587810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.587847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.588063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.588098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 
00:37:32.661 [2024-07-15 08:04:23.588299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.588332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.661 [2024-07-15 08:04:23.588514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.661 [2024-07-15 08:04:23.588558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.661 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.588715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.588762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.588965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.588999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.589163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.589197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.589394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.589432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.589582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.589619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.589800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.589838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.590013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.590048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.590228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.590261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 
00:37:32.662 [2024-07-15 08:04:23.590478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.590514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.590736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.590774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.590987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.591021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.591220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.591261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.591411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.591450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.591628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.591665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.591850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.591902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.592108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.592146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.592320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.592357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 00:37:32.662 [2024-07-15 08:04:23.592518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.662 [2024-07-15 08:04:23.592556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.662 qpair failed and we were unable to recover it. 
00:37:32.662 [2024-07-15 08:04:23.592766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.592800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.592995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.593033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.593209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.593259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.593427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.593465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.593649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.593683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.593870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.593915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.594110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.594143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.594352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.594389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.594574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.594618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.594831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.594868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.595082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.595120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.595300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.595338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.595530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.595563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.595721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.595761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.595949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.595993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.596130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.596167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.596348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.596390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.596578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.596621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.596836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.596868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.597073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.597137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.597332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.597365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.597524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.662 [2024-07-15 08:04:23.597557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.662 qpair failed and we were unable to recover it.
00:37:32.662 [2024-07-15 08:04:23.597742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.597788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.598000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.598037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.598218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.598252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.598417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.598455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.598661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.598698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.598895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.598933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.599108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.599143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.599277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.599310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.599466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.599517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.599720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.599758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.599931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.599965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.600135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.600186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.600398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.600438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.600651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.600686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.600862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.600905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.601128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.601178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.601392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.601428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.601580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.601617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.601810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.601844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.602021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.602060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.602228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.602265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.602442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.602480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.602635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.602669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.602850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.602910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.603146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.603184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.603360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.603397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.603589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.603634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.603867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.603915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.604113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.604152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.604354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.604390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.604551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.604588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.604749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.604783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.604973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.605007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.605194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.605231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.605392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.605426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.605567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.605629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.605851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.605904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.606074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.606111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.606268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.606301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.606482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.606529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.663 qpair failed and we were unable to recover it.
00:37:32.663 [2024-07-15 08:04:23.606751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.663 [2024-07-15 08:04:23.606785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.606982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.607019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.607202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.607246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.607402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.607450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.607655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.607692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.607868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.607914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.608096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.608129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.608289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.608327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.608495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.608545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.608742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.608780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.608957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.609002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.609155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.609202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.609413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.609450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.609630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.609667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.609873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.609913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.610069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.610107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.610285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.610321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.610455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.610504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.610697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.610730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.610940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.610978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.611157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.611194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.611338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.611374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.611584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.611622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.611771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.611820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.611985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.612030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.612214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.612251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.612434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.612468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.612621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.612658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.612855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.612897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.613066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.613100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.613243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.613282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.613469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.613506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.613679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.613730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.613936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.613973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.614155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.614188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.614330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.614374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.614573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.614634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.614789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.614825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.614995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.615029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.615164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.615218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.615418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.664 [2024-07-15 08:04:23.615454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.664 qpair failed and we were unable to recover it.
00:37:32.664 [2024-07-15 08:04:23.615685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.615718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.615894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.615932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.616121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.616175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.616367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.616414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.616627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.616659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.616821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.616854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.617025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.617058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.617261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.617299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.617506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.617543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.617703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.617737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.617922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.617961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.618163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.618200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.618410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.618446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.618602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.618635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.618831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.618868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.619111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.619144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.619276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.619308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.619478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.619517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.619694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.619732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.619932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.619969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.620142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.620182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.620341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.620378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.620536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.620572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.620753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.620789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.620970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.621007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.621167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.621200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.621384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.621422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.621585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.621621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.621816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.621848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.622033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.622066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.622223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.622260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.622461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.622497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.622717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.665 [2024-07-15 08:04:23.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.665 qpair failed and we were unable to recover it.
00:37:32.665 [2024-07-15 08:04:23.622928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.622971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.623134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.623167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.623383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.623420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.623623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.623659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.623931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.623964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.624114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.624146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.624277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.624310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.624523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.624555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.624745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.624781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.624974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.625008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.625142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.625193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.625402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.625434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.625595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.625627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.625786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.625822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.626015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.626048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.626232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.626268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.626453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.626485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.626691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.626727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.626904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.626942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.627120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.627157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.627347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.627380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.627521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.627554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.627708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.627744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.627916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.627953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.628117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.628150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.628315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.628348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.628562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.628598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.628737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.628774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.628947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.628988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.629171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.629209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.629432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.629465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.629623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.629655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.629816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.629848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.630052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.666 [2024-07-15 08:04:23.630088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.666 qpair failed and we were unable to recover it.
00:37:32.666 [2024-07-15 08:04:23.630300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.630336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.666 [2024-07-15 08:04:23.630515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.630551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.666 [2024-07-15 08:04:23.630713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.630746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.666 [2024-07-15 08:04:23.630964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.631011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.666 [2024-07-15 08:04:23.631210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.631247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.666 [2024-07-15 08:04:23.631394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.631430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.666 [2024-07-15 08:04:23.631635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.666 [2024-07-15 08:04:23.631668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.666 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.631825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.631862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.632068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.632105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.632282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.632318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 
00:37:32.667 [2024-07-15 08:04:23.632496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.632528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.632689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.632726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.632928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.632966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.633116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.633152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.633314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.633347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.633503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.633540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.633741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.633776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.633951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.633992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.634172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.634206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.634411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.634448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 
00:37:32.667 [2024-07-15 08:04:23.634628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.634660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.634801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.634851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.635037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.635071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.635237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.635269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.635431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.635463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.635648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.635685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.635886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.635938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.636078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.636111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.636289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.636325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.636549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.636587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 
00:37:32.667 [2024-07-15 08:04:23.636742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.636785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.637005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.637042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.637223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.637260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.637432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.637468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.637620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.637657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.637847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.637890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.638052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.638090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.638262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.638298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.638511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.638544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.638727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.638764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 
00:37:32.667 [2024-07-15 08:04:23.638942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.638978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.639124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.639161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.639321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.639354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.639492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.639526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.639687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.639721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.639908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.639945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.640124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.667 [2024-07-15 08:04:23.640157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.667 qpair failed and we were unable to recover it. 00:37:32.667 [2024-07-15 08:04:23.640374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.640411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.640589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.640625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.640803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.640839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 
00:37:32.668 [2024-07-15 08:04:23.641059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.641092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.641309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.641345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.641527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.641564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.641708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.641744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.641902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.641945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.642113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.642146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.642338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.642374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.642556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.642592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.642796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.642833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.643038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.643071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 
00:37:32.668 [2024-07-15 08:04:23.643274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.643306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.643449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.643499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.643655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.643687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.643832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.643864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.644036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.644069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.644228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.644263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.644418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.644451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.644643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.644675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.644856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.644900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.645075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.645111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 
00:37:32.668 [2024-07-15 08:04:23.645267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.645300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.645509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.645546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.645721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.645758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.645958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.645997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.646175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.646211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.646396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.646432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.646599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.646635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.646810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.646846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.647058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.647096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.647286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.647322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 
00:37:32.668 [2024-07-15 08:04:23.647498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.647534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.647745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.647781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.647984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.648023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.648185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.648218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.648400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.648436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.648642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.648678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.648873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.648912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.668 [2024-07-15 08:04:23.649071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.668 [2024-07-15 08:04:23.649104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.668 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.649241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.649292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.649497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.649533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 
00:37:32.669 [2024-07-15 08:04:23.649693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.649725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.649856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.649925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.650111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.650147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.650301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.650337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.650544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.650586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.650750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.650788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.650969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.651003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.651164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.651215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.651370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.651402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.651580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.651616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 
00:37:32.669 [2024-07-15 08:04:23.651760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.651795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.652004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.652040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.652221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.652253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.652413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.652451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.652609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.652645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.652814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.652849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.653044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.653076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.653238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.653271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.653477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.653513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.653687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.653723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 
00:37:32.669 [2024-07-15 08:04:23.653930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.653964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.654146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.654183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.654335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.654371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.654542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.654578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.654779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.654816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.655023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.655060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.655240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.655276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.655455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.655493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.655647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.655680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.655863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.655909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 
00:37:32.669 [2024-07-15 08:04:23.656125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.656174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.656350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.656386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.669 [2024-07-15 08:04:23.656599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.669 [2024-07-15 08:04:23.656631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.669 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.656815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.656852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.657073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.657105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.657244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.657276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.657462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.657494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.657683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.657720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.657922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.657957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.658166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.658202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 
00:37:32.670 [2024-07-15 08:04:23.658417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.658449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.658639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.658675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.658873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.658930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.659134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.659169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.659382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.659414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.659604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.659641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.659813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.659849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.660038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.660075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.660296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.660329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.660488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.660524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 
00:37:32.670 [2024-07-15 08:04:23.660725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.660761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.660949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.660982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.661169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.661201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.661416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.661453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.661622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.661658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.661860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.661906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.662087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.662120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.662259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.662292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.662491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.662528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 00:37:32.670 [2024-07-15 08:04:23.662706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.670 [2024-07-15 08:04:23.662741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.670 qpair failed and we were unable to recover it. 
00:37:32.670 [2024-07-15 08:04:23.662925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.670 [2024-07-15 08:04:23.662959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.670 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 08:04:23.663 through 08:04:23.707: connect() fails with errno = 111, the sock connection error is reported for tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420, and the qpair cannot be recovered ...]
00:37:32.675 [2024-07-15 08:04:23.707717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.675 [2024-07-15 08:04:23.707750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.675 qpair failed and we were unable to recover it.
00:37:32.676 [2024-07-15 08:04:23.707914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.707951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.708135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.708171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.708331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.708363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.708499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.708551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.708688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.708723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.708898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.708935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.709123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.709155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.709290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.709340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.709556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.709589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.709753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.709786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 
00:37:32.676 [2024-07-15 08:04:23.709946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.709979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.710196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.710256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.710459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.710495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.710687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.710719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.710888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.710921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.711100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.711136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.711324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.711357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.711535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.711571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.711756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.711788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.711939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.711977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 
00:37:32.676 [2024-07-15 08:04:23.712162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.712198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.712408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.712444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.712596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.712629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.712771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.712808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.712997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.713034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.713236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.713272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.713459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.713491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.713677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.713713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.713921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.713954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.714113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.714146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 
00:37:32.676 [2024-07-15 08:04:23.714316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.714348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.714517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.714549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.714751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.714787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.714979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.715015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.715190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.715227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.715410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.715442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.715602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.715652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.715799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.715835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.716041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.716074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 00:37:32.676 [2024-07-15 08:04:23.716265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.676 [2024-07-15 08:04:23.716302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.676 qpair failed and we were unable to recover it. 
00:37:32.676 [2024-07-15 08:04:23.716442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.716478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.716652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.716688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.716891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.716925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.717139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.717175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.717380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.717416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.717620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.717656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.717806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.717838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.718043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.718080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.718259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.718295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.718448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.718484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 
00:37:32.677 [2024-07-15 08:04:23.718640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.718673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.718889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.718934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.719134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.719170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.719367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.719399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.719586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.719628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.719824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.719857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.720086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.720122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.720271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.720306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.720464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.720496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.720652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.720689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 
00:37:32.677 [2024-07-15 08:04:23.720837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.720872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.721045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.721089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.721272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.721305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.721477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.721514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.721662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.721699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.721901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.721939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.722100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.722132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.722339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.722375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.722547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.722582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.722757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.722793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 
00:37:32.677 [2024-07-15 08:04:23.722942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.722975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.723157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.723194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.723393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.723429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.723600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.723637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.723860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.723901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.724093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.724130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.724323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.724369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.724553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.724586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.724795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.724831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.677 [2024-07-15 08:04:23.725023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.725057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 
00:37:32.677 [2024-07-15 08:04:23.725240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.677 [2024-07-15 08:04:23.725276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.677 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.725445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.725481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.725687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.725719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.725893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.725932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.726107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.726143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.726330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.726366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.726575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.726607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.726854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.726902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.727102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.727135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.727327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.727363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 
00:37:32.678 [2024-07-15 08:04:23.727551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.727584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.727735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.727772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.727948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.727996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.728193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.728226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.728387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.728419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.728617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.728654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.728800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.728836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.729027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.729063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.729251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.729283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.729495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.729531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 
00:37:32.678 [2024-07-15 08:04:23.729691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.729736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.729917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.729955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.730145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.730178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.730366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.730402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.730582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.730619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.730771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.730807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.730997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.731030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.678 qpair failed and we were unable to recover it. 00:37:32.678 [2024-07-15 08:04:23.731198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.678 [2024-07-15 08:04:23.731237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.731433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.731470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.731646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.731682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-15 08:04:23.731863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.731909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.732133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.732174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.732348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.732384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.732581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.732617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.732773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.732817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.733039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.733077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.733254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.733290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.733474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.733510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.733672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.733715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.733930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.733976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-15 08:04:23.734149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.734187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.734347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.734381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.734571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.734616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.734800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.734837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.735037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.735070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.735234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.735269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.735453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.735485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.735662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.735699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.735891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.735929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 00:37:32.679 [2024-07-15 08:04:23.736095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.679 [2024-07-15 08:04:23.736128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.679 qpair failed and we were unable to recover it. 
00:37:32.679 [2024-07-15 08:04:23.736328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.679 [2024-07-15 08:04:23.736360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.679 qpair failed and we were unable to recover it.
00:37:32.679 [... the same three-line error repeats for every reconnect attempt from 08:04:23.736 through 08:04:23.782: connect() to 10.0.0.2 port 4420 fails with errno = 111 and tqpair=0x6150001ffe80 cannot be recovered ...]
00:37:32.684 [2024-07-15 08:04:23.782381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.684 [2024-07-15 08:04:23.782419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.684 qpair failed and we were unable to recover it.
00:37:32.684 [2024-07-15 08:04:23.782580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-15 08:04:23.782617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-15 08:04:23.782790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-15 08:04:23.782831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-15 08:04:23.783048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-15 08:04:23.783080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-15 08:04:23.783264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-15 08:04:23.783300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-15 08:04:23.783470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.684 [2024-07-15 08:04:23.783507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.684 qpair failed and we were unable to recover it. 00:37:32.684 [2024-07-15 08:04:23.783650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.783685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.783860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.783924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.784126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.784178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.784351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.784387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.784587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.784623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-15 08:04:23.784833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.784870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.785059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.785093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.785303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.785339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.785513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.785550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.785794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.785831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.786063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.786096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.786278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.786314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.786461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.786497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.786672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.786704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.786915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.786966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-15 08:04:23.787152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.787189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.787528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.787596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.787794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.787827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.787996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.788029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.788213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.788249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.788442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.788491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.788679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.788713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.788903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.788940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.789119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.789155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.789308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.789344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-15 08:04:23.789529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.789561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.789749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.789795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.790014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.790052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.790247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.790284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.790468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.790511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.790721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.790758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.790923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.790959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.791134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.791170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.791319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.791351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.791532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.791569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 
00:37:32.685 [2024-07-15 08:04:23.791743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.791779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.791956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.791994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.792161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.792195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.792337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.792370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.685 qpair failed and we were unable to recover it. 00:37:32.685 [2024-07-15 08:04:23.792532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.685 [2024-07-15 08:04:23.792582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.792754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.792791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.792975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.793009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.793172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.793208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.793412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.793448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.793624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.793660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-15 08:04:23.793845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.793884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.794071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.794107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.794318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.794350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.794547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.794597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.794797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.794830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.795029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.795066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.795246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.795282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.795422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.795458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.795643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.795674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.795855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.795907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-15 08:04:23.796064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.796102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.796278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.796314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.796477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.796509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.796689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.796725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.796937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.796970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.797153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.797185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.797357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.797389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.797618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.797655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.797841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.797893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.798080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-15 08:04:23.798302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.798334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.798469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.798502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.798701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.798737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.798923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.798968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.799162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.799196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.799387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.799423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.799598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.799634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.799806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.799843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.800035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.800067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.800226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.800262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 
00:37:32.686 [2024-07-15 08:04:23.800413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.800450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.800633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.800674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.800856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.800897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.801036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.801087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.686 [2024-07-15 08:04:23.801267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.686 [2024-07-15 08:04:23.801303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.686 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.801480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.801516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.801698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.801730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.801913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.801951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.802139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.802177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.802359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.802395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 
00:37:32.687 [2024-07-15 08:04:23.802579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.802612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.802815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.802853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.803076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.803113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.803250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.803286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.803501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.803533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.803722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.803758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.803942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.803994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.804212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.804255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.804445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.804488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.804700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.804736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 
00:37:32.687 [2024-07-15 08:04:23.804914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.804950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.805120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.805156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.805309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.805343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.805521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.805557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.805738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.805778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.805925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.805968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.806188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.806220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.806379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.806416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.806568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.806603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.806781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.806817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 
00:37:32.687 [2024-07-15 08:04:23.807001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.807035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.807244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.807281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.807455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.807490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.807660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.807696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.807887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.807920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.808126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.808163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.808376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.808409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.808571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.808603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.808744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.808777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 00:37:32.687 [2024-07-15 08:04:23.808941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.687 [2024-07-15 08:04:23.808993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.687 qpair failed and we were unable to recover it. 
00:37:32.687 [2024-07-15 08:04:23.809184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.809221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.809376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.809417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.809600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.809633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.809847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.809904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.810088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.810124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.810287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.810325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.810521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.810554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.810740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.810777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.810960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.810993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 00:37:32.688 [2024-07-15 08:04:23.811126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.688 [2024-07-15 08:04:23.811158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.688 qpair failed and we were unable to recover it. 
00:37:32.688 [2024-07-15 08:04:23.811322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.688 [2024-07-15 08:04:23.811354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.688 qpair failed and we were unable to recover it.
[... ~200 further repetitions of this same connect()/qpair-failure triple omitted: timestamps 2024-07-15 08:04:23.811 through 08:04:23.857, every attempt failing with errno = 111 against tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420 ...]
00:37:32.693 [2024-07-15 08:04:23.857318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.693 [2024-07-15 08:04:23.857351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.693 qpair failed and we were unable to recover it.
00:37:32.693 [2024-07-15 08:04:23.857488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.857538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.857724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.857770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.857969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.858003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.858194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.858230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.858413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.858446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.858608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.858659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.858837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.858870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.859046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.859080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.859241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.859280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.859436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.859469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 
00:37:32.693 [2024-07-15 08:04:23.859652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.859684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.859887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.859926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.860101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.860137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.860321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.860358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.860562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.860605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.860768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.860812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.861041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.693 [2024-07-15 08:04:23.861074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.693 qpair failed and we were unable to recover it. 00:37:32.693 [2024-07-15 08:04:23.861249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.861287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.861438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.861471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.861635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.861685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 
00:37:32.694 [2024-07-15 08:04:23.861856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.861901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.862079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.862115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.862301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.862336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.862547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.862584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.862731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.862768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.862982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.863019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.863211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.863243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.863452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.863489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.863696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.863733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.863901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.863953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 
00:37:32.694 [2024-07-15 08:04:23.864144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.864177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.864375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.864411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.864631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.864663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.864854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.864894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.865143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.865177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.865413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.865446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.865578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.865611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.865778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.865812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.866020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.866053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.866197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.866230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 
00:37:32.694 [2024-07-15 08:04:23.866395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.866446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.866647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.866683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.866837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.866873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.867108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.867159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.867382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.867419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.867625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.867688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.867846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.867889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.868049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.868086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.868263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.868305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.868629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.868702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 
00:37:32.694 [2024-07-15 08:04:23.868894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.868929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.869122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.869165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.869350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.869386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.869537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.869573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.869734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.869765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.869954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.694 [2024-07-15 08:04:23.869991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.694 qpair failed and we were unable to recover it. 00:37:32.694 [2024-07-15 08:04:23.870174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.870207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-15 08:04:23.870394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.870452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-15 08:04:23.870610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.870643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-15 08:04:23.870776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.870844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 
00:37:32.695 [2024-07-15 08:04:23.871051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.871083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-15 08:04:23.871247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.871289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.695 qpair failed and we were unable to recover it. 00:37:32.695 [2024-07-15 08:04:23.871459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.695 [2024-07-15 08:04:23.871492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.971 qpair failed and we were unable to recover it. 00:37:32.971 [2024-07-15 08:04:23.871659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.971 [2024-07-15 08:04:23.871693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.971 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.871821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.871854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.872028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.872066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.872277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.872311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.872524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.872561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.872750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.872782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.872947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.872990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 
00:37:32.972 [2024-07-15 08:04:23.873151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.873185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.873404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.873442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.873593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.873644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.873795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.873831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.874023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.874056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.874250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.874286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.874459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.874495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.874673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.874710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.874857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.874911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.875109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.875146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 
00:37:32.972 [2024-07-15 08:04:23.875294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.875330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.875541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.875577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.875735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.875770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.875951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.875988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.876198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.876234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.876425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.876459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.876626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.876660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.876817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.876854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.877041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.877082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.877262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.877300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 
00:37:32.972 [2024-07-15 08:04:23.877532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.877566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.877749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.877786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.877950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.877984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.878182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.878218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.878433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.878465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.878620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.878657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.878813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.878862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.879069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.879105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.879317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.879350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.879502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.879539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 
00:37:32.972 [2024-07-15 08:04:23.879742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.879778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.879952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.879990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.880195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.972 [2024-07-15 08:04:23.880237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.972 qpair failed and we were unable to recover it. 00:37:32.972 [2024-07-15 08:04:23.880450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.880484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.880692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.880729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.880919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.880952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.881141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.881174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.881332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.881369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.881568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.881603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.881807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.881843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 
00:37:32.973 [2024-07-15 08:04:23.882021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.882056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.882263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.882300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.882496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.882532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.882711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.882746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.882929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.882962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.883134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.883167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.883355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.883398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.883600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.883636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.883826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.883859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.884064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.884114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 
00:37:32.973 [2024-07-15 08:04:23.884329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.884361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.884562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.884612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.884795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.884827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.884995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.885029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.885217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.885250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.885451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.885487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.885643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.885676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.885860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.885907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.886078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.886119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 00:37:32.973 [2024-07-15 08:04:23.886294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.973 [2024-07-15 08:04:23.886330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.973 qpair failed and we were unable to recover it. 
00:37:32.973 [2024-07-15 08:04:23.886538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:32.973 [2024-07-15 08:04:23.886571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 
00:37:32.973 qpair failed and we were unable to recover it. 
00:37:32.973 [... the same three-record error (connect() failed, errno = 111 / sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every reconnect attempt from 08:04:23.886754 through 08:04:23.933630 ...] 
00:37:32.979 [2024-07-15 08:04:23.933793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:32.979 [2024-07-15 08:04:23.933826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 
00:37:32.979 qpair failed and we were unable to recover it. 
00:37:32.979 [2024-07-15 08:04:23.934021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.934054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.934234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.934270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.934425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.934458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.934629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.934665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.934831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.934867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.935056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.935092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.935251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.935283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.935457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.935494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.935679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.935711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.935869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.935911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 
00:37:32.979 [2024-07-15 08:04:23.936065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.936098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.936226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.936259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.936411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.936444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.936625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.936675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.936855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.936901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.937099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.937132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.937316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.937352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.937531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.937567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.937720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.937752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.937896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.937950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 
00:37:32.979 [2024-07-15 08:04:23.938120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.938156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.938330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.938366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.938577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.938609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.938792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.938828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.938995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.939028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.939233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.939285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.939502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.939535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.939718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.939754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.939898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.939940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.940117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.940153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 
00:37:32.979 [2024-07-15 08:04:23.940306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.940339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.940545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.940581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.940728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.940765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.940915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.940952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.941132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.941165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.941388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.941425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.979 qpair failed and we were unable to recover it. 00:37:32.979 [2024-07-15 08:04:23.941644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.979 [2024-07-15 08:04:23.941680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.941847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.941892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.942098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.942130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.942340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.942376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 
00:37:32.980 [2024-07-15 08:04:23.942547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.942582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.942732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.942769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.942958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.942991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.943134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.943185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.943357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.943393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.943573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.943609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.943794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.943827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.944020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.944057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.944230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.944267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.944441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.944477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 
00:37:32.980 [2024-07-15 08:04:23.944667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.944699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.944856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.944897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.945085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.945121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.945299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.945335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.945521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.945553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.945736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.945777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.945922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.945958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.946115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.946151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.946328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.946370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.946528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.946565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 
00:37:32.980 [2024-07-15 08:04:23.946716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.946752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.946898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.946935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.947111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.947142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.947295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.947331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.947532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.947568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.947705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.947740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.947919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.947952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.948134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.948170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.948357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.948392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.980 [2024-07-15 08:04:23.948576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.948612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 
00:37:32.980 [2024-07-15 08:04:23.948758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.980 [2024-07-15 08:04:23.948791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.980 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.948955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.948988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.949163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.949199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.949348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.949383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.949569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.949601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.949778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.949812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.950047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.950080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.950271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.950304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.950461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.950494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.950651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.950699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 
00:37:32.981 [2024-07-15 08:04:23.950897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.950946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.951108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.951140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.951336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.951368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.951508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.951557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.951734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.951770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.951970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.952004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.952188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.952220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.952372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.952408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.952579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.952616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.952797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.952833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 
00:37:32.981 [2024-07-15 08:04:23.953026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.953058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.953222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.953258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.953404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.953440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.953624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.953660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.953837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.953869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.954089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.954130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.954332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.954368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.954545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.954580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.954760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.954792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.954954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.954991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 
00:37:32.981 [2024-07-15 08:04:23.955175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.955211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.955416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.955452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.955682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.955715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.955943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.955976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.956136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.956170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.956328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.956364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.956569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.956600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.956784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.956820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.957008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.957042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.981 [2024-07-15 08:04:23.957200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.957237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 
00:37:32.981 [2024-07-15 08:04:23.957417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.981 [2024-07-15 08:04:23.957450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.981 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.957600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.957636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.957819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.957855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.958015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.958052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.958236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.958268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.958454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.958491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.958661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.958697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.958904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.958940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.959156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.959188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.959374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.959410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 
00:37:32.982 [2024-07-15 08:04:23.959583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.959618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.959775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.959812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.960019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.960063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.960243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.960279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.960430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.960466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.960613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.960649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.960824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.960856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.961051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.961087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.961273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.961309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 00:37:32.982 [2024-07-15 08:04:23.961501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.982 [2024-07-15 08:04:23.961536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.982 qpair failed and we were unable to recover it. 
00:37:32.982 [2024-07-15 08:04:23.961748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.982 [2024-07-15 08:04:23.961780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.982 qpair failed and we were unable to recover it.
00:37:32.987 [... the same three-line error repeats for every retried connection attempt on tqpair=0x6150001ffe80 (addr=10.0.0.2, port=4420), with target-side timestamps running from 08:04:23.961964 through 08:04:24.007405; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:32.987 [2024-07-15 08:04:24.007590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.987 [2024-07-15 08:04:24.007626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.987 qpair failed and we were unable to recover it. 00:37:32.987 [2024-07-15 08:04:24.007766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.987 [2024-07-15 08:04:24.007802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.987 qpair failed and we were unable to recover it. 00:37:32.987 [2024-07-15 08:04:24.007978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.987 [2024-07-15 08:04:24.008012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.987 qpair failed and we were unable to recover it. 00:37:32.987 [2024-07-15 08:04:24.008218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.987 [2024-07-15 08:04:24.008254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.987 qpair failed and we were unable to recover it. 00:37:32.987 [2024-07-15 08:04:24.008431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.987 [2024-07-15 08:04:24.008467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.987 qpair failed and we were unable to recover it. 00:37:32.987 [2024-07-15 08:04:24.008645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.987 [2024-07-15 08:04:24.008682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.008865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.008912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.009125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.009161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.009334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.009384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.009547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.009584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 
00:37:32.988 [2024-07-15 08:04:24.009794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.009826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.010016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.010057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.010271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.010307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.010474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.010506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.010666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.010698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.010901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.010940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.011139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.011175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.011356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.011394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.011577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.011610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.011790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.011826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 
00:37:32.988 [2024-07-15 08:04:24.012021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.012054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.012200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.012236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.012425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.012469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.012683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.012719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.012899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.012937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.013118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.013155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.013364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.013396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.013608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.013644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.013787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.013823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.014013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.014051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 
00:37:32.988 [2024-07-15 08:04:24.014266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.014300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.014500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.014536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.014741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.014777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.014951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.014988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.015167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.015200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.015351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.015393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.015609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.015643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.015817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.015850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.016049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.016092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.016264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.016297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 
00:37:32.988 [2024-07-15 08:04:24.016512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.016548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.016722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.016757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.016916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.016960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.017139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.017191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.017385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.988 [2024-07-15 08:04:24.017419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.988 qpair failed and we were unable to recover it. 00:37:32.988 [2024-07-15 08:04:24.017596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.017631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.017836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.017868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.018067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.018103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.018268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.018304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.018502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.018541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 
00:37:32.989 [2024-07-15 08:04:24.018716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.018749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.018911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.018959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.019166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.019203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.019402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.019437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.019626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.019658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.019869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.019916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.020129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.020166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.020345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.020381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.020598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.020631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.020818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.020854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 
00:37:32.989 [2024-07-15 08:04:24.021038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.021075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.021277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.021313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.021542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.021575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.021744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.021779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.021974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.022011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.022152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.022189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.022363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.022395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.022579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.022615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.022828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.022874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.023063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.023115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 
00:37:32.989 [2024-07-15 08:04:24.023300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.023334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.023536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.023587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.023764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.023799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.023946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.023982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.024140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.024172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.024311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.024367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.024546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.024590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.024747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.024783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.024942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.024975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 00:37:32.989 [2024-07-15 08:04:24.025170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.989 [2024-07-15 08:04:24.025207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.989 qpair failed and we were unable to recover it. 
00:37:32.989 [2024-07-15 08:04:24.025346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.025384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.025586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.025622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.025798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.025830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.026049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.026100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.026245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.026281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.026489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.026526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.026692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.026726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.026935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.026971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.027134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.027170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.027340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.027375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 
00:37:32.990 [2024-07-15 08:04:24.027542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.027575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.027768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.027805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.028015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.028052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.028197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.028234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.028388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.028421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.028548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.028581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.028779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.028815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.029035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.029080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.029243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.029276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.029455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.029491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 
00:37:32.990 [2024-07-15 08:04:24.029662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.029698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.029847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.029891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.030070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.030113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.030261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.030298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.030475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.030520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.030688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.030722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.030916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.030950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.031115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.031163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.031336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.031373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.031583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.031619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 
00:37:32.990 [2024-07-15 08:04:24.031830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.031863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.032065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.032102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.032280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.032324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.032538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.032574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.032765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.032797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.032993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.033036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.033179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.033216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.033383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.033421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.033624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.033657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 00:37:32.990 [2024-07-15 08:04:24.033813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.990 [2024-07-15 08:04:24.033850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.990 qpair failed and we were unable to recover it. 
00:37:32.990 [2024-07-15 08:04:24.034055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.034088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.034246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.034284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.034486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.034519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.034652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.034686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.034859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.034906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.035109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.035145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.035330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.035362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.035543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.035578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.035784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.035822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.036001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.036038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 
00:37:32.991 [2024-07-15 08:04:24.036196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.036229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.036434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.036471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.036672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.036708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.036851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.036898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.037111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.037144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.037302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.037349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.037553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.037589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.037739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.037778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.037962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.037996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.038151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.038186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 
00:37:32.991 [2024-07-15 08:04:24.038357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.038393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.038584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.038620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.038776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.038810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.038997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.039035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.039238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.039283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.039496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.039533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.039722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.039754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.039911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.039948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.040139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.040171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.040337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.040380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 
00:37:32.991 [2024-07-15 08:04:24.040567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.040600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.040785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.040829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.041027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.041060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.041212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.041249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.041436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.041468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.041610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.041648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.041835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.041891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.042060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.042092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.042223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.042256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 00:37:32.991 [2024-07-15 08:04:24.042439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.991 [2024-07-15 08:04:24.042477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.991 qpair failed and we were unable to recover it. 
00:37:32.991 [2024-07-15 08:04:24.042655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.042692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.042842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.042889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.043080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.043113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.043249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.043282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.043444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.043501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.043712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.043748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.043961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.044004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.044185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.044222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.044401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.044437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.044599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.044635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 
00:37:32.992 [2024-07-15 08:04:24.044814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.044846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.045048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.045086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.045262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.045298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.045498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.045534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.045720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.045752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.045912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.045949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.046124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.046161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.046312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.046347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.046535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.046568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.046730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.046763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 
00:37:32.992 [2024-07-15 08:04:24.046958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.047004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.047205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.047241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.047428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.047460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.047628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.047660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.047890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.047938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.048163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.048200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.048388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.048427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.048609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.048645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.048854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.048900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.049053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.049088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 
00:37:32.992 [2024-07-15 08:04:24.049269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.049312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.049500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.049546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.049730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.049766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.049941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.049978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.050163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.050196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.050359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.050396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.050579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.050614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.050824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.050861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.051037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.051071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.992 [2024-07-15 08:04:24.051251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.051287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 
00:37:32.992 [2024-07-15 08:04:24.051432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.992 [2024-07-15 08:04:24.051469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.992 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.051670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.051721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.051939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.051972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.052195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.052231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.052410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.052447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.052634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.052671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.052858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.052899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.053035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.053068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.053255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.053293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.053497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.053533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 
00:37:32.993 [2024-07-15 08:04:24.053688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.053721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.053933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.053970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.054321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.054358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.054533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.054569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.054798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.054834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.055025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.055058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.055270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.055306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.055467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.055508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.055682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.055715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.055884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.055938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 
00:37:32.993 [2024-07-15 08:04:24.056098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.056130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.056290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.056326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.056541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.056573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.056760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.056797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.056996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.057034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.057205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.057242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.057425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.057458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.057607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.057644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.057817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.057853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.058044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.058081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 
00:37:32.993 [2024-07-15 08:04:24.058238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.058281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.058415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.058479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.058687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.058723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.058933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.058968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.059176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.059208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.993 [2024-07-15 08:04:24.059399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.993 [2024-07-15 08:04:24.059434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.993 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.059604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.059639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.059804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.059839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.060025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.060059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.060196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.060229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 
00:37:32.994 [2024-07-15 08:04:24.060414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.060446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.060632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.060669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.060852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.060891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.061071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.061109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.061334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.061370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.061513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.061549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.061746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.061779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.061973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.062010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.062211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.062246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.062407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.062443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 
00:37:32.994 [2024-07-15 08:04:24.062628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.062663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.062854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.062914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.063091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.063128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.063329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.063365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.063524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.063557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.063739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.063776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.063943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.063980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.064132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.064183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.064403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.064437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.064653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.064690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 
00:37:32.994 [2024-07-15 08:04:24.064890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.064924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.065102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.065138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.065345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.065381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.065591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.065628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.065811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.065845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.066036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.066069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.066198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.066231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.066394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.066427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.066590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.066624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.066776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 
00:37:32.994 [2024-07-15 08:04:24.066983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.067017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.067232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.067285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.067432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.067487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.067641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.067677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.067929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.067963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.994 [2024-07-15 08:04:24.068101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.994 [2024-07-15 08:04:24.068133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.994 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.068294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.068344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.068512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.068547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.068696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.068728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.068869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.068926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 
00:37:32.995 [2024-07-15 08:04:24.069154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.069191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.069376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.069412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.069596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.069629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.069766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.069799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.069959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.069992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.070127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.070161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.070298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.070330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.070528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.070578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.070724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.070761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.070966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.071002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 
00:37:32.995 [2024-07-15 08:04:24.071148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.071181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.071363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.071401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.071579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.071615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.071838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.071874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.072046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.072089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.072280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.072317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.072521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.072557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.072698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.072734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.072902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.072935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.073121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.073158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 
00:37:32.995 [2024-07-15 08:04:24.073370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.073408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.073555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.073592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.073774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.073811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.073982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.074020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.074208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.074240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.074441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.074475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.074646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.074680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.074862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.074909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.075099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.075135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.075355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.075390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 
00:37:32.995 [2024-07-15 08:04:24.075603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.075636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.075820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.075857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.076045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.076078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.076238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.076467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.076501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.995 [2024-07-15 08:04:24.076709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.995 [2024-07-15 08:04:24.076746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.995 qpair failed and we were unable to recover it. 00:37:32.996 [2024-07-15 08:04:24.076934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.996 [2024-07-15 08:04:24.076970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.996 qpair failed and we were unable to recover it. 00:37:32.996 [2024-07-15 08:04:24.077141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.996 [2024-07-15 08:04:24.077176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.996 qpair failed and we were unable to recover it. 00:37:32.996 [2024-07-15 08:04:24.077351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.996 [2024-07-15 08:04:24.077383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.996 qpair failed and we were unable to recover it. 00:37:32.996 [2024-07-15 08:04:24.077566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.996 [2024-07-15 08:04:24.077603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.996 qpair failed and we were unable to recover it. 
00:37:32.996 [2024-07-15 08:04:24.077776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.077812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.077966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.078009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.078191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.078224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.078361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.078394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.078574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.078626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.078791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.078827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.079034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.079077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.079235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.079273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.079472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.079508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.079691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.079728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.079973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.080008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.080189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.080226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.080371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.080407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.080581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.080617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.080824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.080857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.081126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.081164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.081343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.081379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.081565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.081601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.081760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.081793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.081960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.081993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.082191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.082223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.082379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.082412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.082651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.082697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.082858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.082904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.083073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.083110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.083276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.083312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.083528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.083561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.083745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.083781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.083957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.083996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.084139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.084175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.084360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.084392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.084548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.084594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.084799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.084835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.085011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.085048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.085202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.085234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.996 [2024-07-15 08:04:24.085432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.996 [2024-07-15 08:04:24.085483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.996 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.085684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.085719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.085905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.085942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.086150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.086194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.086382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.086420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.086620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.086656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.086830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.086866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.087022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.087055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.087225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.087277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.087464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.087497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.087657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.087706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.087893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.087926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.088137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.088173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.088323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.088359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.088567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.088603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.088764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.088797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.088987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.089020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.089208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.089244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.089418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.089454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.089609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.089642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.089853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.089897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.090046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.090083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.090259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.090295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.090507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.090539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.090732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.090768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.090933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.090969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.091154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.091190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.091377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.091414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.091601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.091638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.091811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.091847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.092032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.092069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.092281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.092313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.092502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.092539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.092743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.092779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.092987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.093020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.093179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.093212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.093395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.093431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.093616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.093652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.093865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.093933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.997 [2024-07-15 08:04:24.094065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.997 [2024-07-15 08:04:24.094098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.997 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.094264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.094297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.094442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.094474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.094660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.094692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.094893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.094927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.095119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.095156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.095327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.095368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.095540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.095576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.095792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.095824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.096016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.096052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.096230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.096266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.096441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.096477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.096628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.096661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.096845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.096892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.097071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.097107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.097263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.097299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.097483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.097517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.097684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.097717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.097897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.097934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.098141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.098173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.098316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.098348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.098511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.098544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.098725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.098761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.098969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.099006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.099155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.099187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.099369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.099406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.099558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.099594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.099749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.099786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.099947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.099993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.100179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.100216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.100367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.100402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.100562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.100598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.100751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.100784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.101000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.101036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.101210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.101246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.101446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.101482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.998 [2024-07-15 08:04:24.101664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.998 [2024-07-15 08:04:24.101696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.998 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.101902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.101953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.102126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.102163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.102337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.102372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.102525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.102556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.102723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.102755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.102923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.102956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.103116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.103149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.103372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.103405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.103563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.103600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.103799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.103836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.104014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.104050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.104213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.104246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.104399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.104436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.104577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.104614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.104790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.104826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.105023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.105056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.105214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.105247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.105404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.105441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.105624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.105660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.105911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.105945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.106081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.106113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.106265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.106302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.106472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.106508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.106717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.106749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.106904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.106941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.107123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.107158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.107338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.107374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.107593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.107625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.107837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.107873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.108071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.108106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.108255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.108290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.108504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.108541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.108706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.108743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.108914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.108950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.109117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.109153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.109375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.109407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.109613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.109649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.109831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.109867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.110055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.110087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.110224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.999 [2024-07-15 08:04:24.110256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.999 qpair failed and we were unable to recover it.
00:37:32.999 [2024-07-15 08:04:24.110396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.110428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.110588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.110620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.110784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.110816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.110981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.111014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.111200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.111236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.111386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.111422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.111603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.111639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.111819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.111851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.111995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.112028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.112212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.112244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.112435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.112471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.112628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.112661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.112872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.112915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.113123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.113159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.113389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.113426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.113618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.113660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.113841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.113885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.114061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.114098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.114252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.000 [2024-07-15 08:04:24.114288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.000 qpair failed and we were unable to recover it.
00:37:33.000 [2024-07-15 08:04:24.114496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.114529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.114721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.114758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.114933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.114969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.115147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.115183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.115337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.115369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.115556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.115593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.115796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.115832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.116016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.116052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.116231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.116264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.116469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.116505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 
00:37:33.000 [2024-07-15 08:04:24.116680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.116716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.116921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.116958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.117117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.117153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.117282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.117315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.117506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.117541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.117714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.117751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.117968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.118001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.118220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.118290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.118471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.118507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.118709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.118745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 
00:37:33.000 [2024-07-15 08:04:24.118933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.000 [2024-07-15 08:04:24.118966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.000 qpair failed and we were unable to recover it. 00:37:33.000 [2024-07-15 08:04:24.119171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.119207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.119393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.119425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.119589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.119622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.119818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.119851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.120040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.120077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.120261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.120297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.120501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.120536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.120714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.120747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.120891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.120924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 
00:37:33.001 [2024-07-15 08:04:24.121082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.121114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.121243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.121275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.121436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.121468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.121657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.121693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.121861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.121905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.122062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.122098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.122281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.122313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.122464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.122506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.122710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.122746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.122903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.122940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 
00:37:33.001 [2024-07-15 08:04:24.123118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.123151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.123294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.123327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.123508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.123540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.123698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.123733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.123912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.123945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.124106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.124140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.124359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.124395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.124570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.124605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.124815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.124847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.125012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.125050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 
00:37:33.001 [2024-07-15 08:04:24.125227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.125263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.125438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.125474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.125654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.125693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.125852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.125891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.126074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.126110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.126297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.126329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.126459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.126491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.126659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.126692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.126884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.126921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.127107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.127144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 
00:37:33.001 [2024-07-15 08:04:24.127325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.127367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.001 qpair failed and we were unable to recover it. 00:37:33.001 [2024-07-15 08:04:24.127576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.001 [2024-07-15 08:04:24.127613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.127793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.127829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.127990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.128027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.128192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.128224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.128361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.128412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.128664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.128700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.128871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.128916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.129106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.129138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.129307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.129339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 
00:37:33.002 [2024-07-15 08:04:24.129523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.129560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.129739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.129775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.129950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.129984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.130192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.130228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.130379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.130416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.130620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.130656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.130807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.130840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.130984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.131037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.131235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.131271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.131478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.131515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 
00:37:33.002 [2024-07-15 08:04:24.131699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.131731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.131911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.131947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.132133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.132166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.132299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.132346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.132501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.132534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.132717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.132754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.132951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.132987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.133162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.133199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.133387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.133419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.133628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.133664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 
00:37:33.002 [2024-07-15 08:04:24.133846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.133891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.134049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.134085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.134244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.134282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.134464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.134500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.134676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.134712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.134858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.134903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.135081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.002 [2024-07-15 08:04:24.135114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.002 qpair failed and we were unable to recover it. 00:37:33.002 [2024-07-15 08:04:24.135290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.135326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.135510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.135545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.135698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.135734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 
00:37:33.003 [2024-07-15 08:04:24.135944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.135976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.136196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.136232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.136404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.136440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.136643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.136679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.136864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.136903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.137105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.137141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.137353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.137389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.137537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.137573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.137807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.137842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.138008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.138041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 
00:37:33.003 [2024-07-15 08:04:24.138201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.138234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.138448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.138480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.138638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.138670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.138852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.138900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.139098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.139134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.139310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.139346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.139495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.139528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.139691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.139724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.139908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.139945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.140155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 
00:37:33.003 [2024-07-15 08:04:24.140367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.140399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.140537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.140569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.140723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.140772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.140964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.141001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.141177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.141220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.141441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.141473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.141629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.141662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.141869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.141913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.142069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.142102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.142283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.142320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 
00:37:33.003 [2024-07-15 08:04:24.142520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.142556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.142730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.142766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.142943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.142980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.143157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.143193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.143367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.143404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.143603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.143639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.143860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.003 [2024-07-15 08:04:24.143901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.003 qpair failed and we were unable to recover it. 00:37:33.003 [2024-07-15 08:04:24.144126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.144158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.144292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.144326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.144520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.144556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 
00:37:33.004 [2024-07-15 08:04:24.144778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.144810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.144979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.145016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.145192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.145228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.145405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.145441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.145606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.145639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.145794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.145830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.146026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.146060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.146215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.146252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.146452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.146485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.146638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.146674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 
00:37:33.004 [2024-07-15 08:04:24.146846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.146889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.147072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.147107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.147297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.147329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.147511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.147547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.147717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.147753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.147938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.147971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.148125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.148157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.148309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.148345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.148546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.148582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 
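Note: errno = 111 is ECONNREFUSED on Linux, i.e. 10.0.0.2 is reachable but nothing is accepting connections on the NVMe/TCP port 4420 (the target side is down or not yet listening). A minimal standalone C sketch, not SPDK code, that reproduces the same errno by connecting to a port with no listener (the loopback address and the assumption that nothing listens on 4420 locally are ours):

    /* connect_refused.c - standalone illustration, not SPDK code.
     * A TCP connect() to a port with no listener fails with
     * errno 111 (ECONNREFUSED) on Linux, matching the errno
     * reported by posix_sock_create in the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return 1;
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumes no local listener on 4420 */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }

Run against a closed port this prints "connect() failed, errno = 111 (Connection refused)", the exact errno seen throughout this section.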
00:37:33.004 [2024-07-15 08:04:24.148756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:33.004 [2024-07-15 08:04:24.148996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.149045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.149237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.149272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.149464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.149515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.149673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.149724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.149895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.149929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.150088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.150314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.150364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.150554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.150606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 00:37:33.004 [2024-07-15 08:04:24.150773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.004 [2024-07-15 08:04:24.150806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.004 qpair failed and we were unable to recover it. 
00:37:33.004 [... connect() continues to fail with errno = 111 from 08:04:24.150991 through 08:04:24.179581, always against addr=10.0.0.2, port=4420; the failing tqpair alternates between 0x615000210000 and 0x6150001f2a00, and every attempt ends with "qpair failed and we were unable to recover it." ...]
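Each triplet ends with the harness giving up on the qpair after its reconnect attempts fail. The bounded-retry shape of that behavior can be sketched generically as follows; the retry budget and function names are hypothetical, and this is not the test harness's actual logic.

/* Generic bounded-retry pattern around a failing qpair connect.
 * Hypothetical illustration of the "tried, failed, gave up" behavior
 * visible in the log; not the test harness's actual code. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for one qpair connect attempt; always fails here, the way
 * every attempt in the log fails with ECONNREFUSED. */
static bool try_connect_qpair(void)
{
    return false;
}

int main(void)
{
    const int max_attempts = 5;   /* hypothetical retry budget */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect_qpair()) {
            puts("qpair connected");
            return 0;
        }
        fprintf(stderr, "attempt %d: connect() failed\n", attempt);
    }

    fputs("qpair failed and we were unable to recover it.\n", stderr);
    return 1;
}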
00:37:33.006 [... the identical failure pattern continues from 08:04:24.179737 through 08:04:24.192701, still alternating between tqpair=0x6150001f2a00 and tqpair=0x615000210000; partway through, the log's elapsed-time prefix advances from 00:37:33.006 to 00:37:33.329 through 00:37:33.331 ...]
00:37:33.331 [2024-07-15 08:04:24.192830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.331 [2024-07-15 08:04:24.192863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.331 qpair failed and we were unable to recover it.
00:37:33.331 [2024-07-15 08:04:24.193039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.193072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.193261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.193311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.193491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.193544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.193714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.193749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.193989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.194024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.194166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.194199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.194394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.194427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.194587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.194619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.194782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.194815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.194972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.195006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 
00:37:33.331 [2024-07-15 08:04:24.195181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.195234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.195453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.195505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.195685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.195736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.195931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.195965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.196159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.196214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.196413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.196464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.196671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.196723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.196899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.196952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.197139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.197191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.197421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.197478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 
00:37:33.331 [2024-07-15 08:04:24.197631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.197665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.197873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.197918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.198147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.198198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.198409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.198460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.198658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.198692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.198857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.198901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.199033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.199068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.199288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.199344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.199532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.199584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.199750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.199783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 
00:37:33.331 [2024-07-15 08:04:24.199969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.200025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.200251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.200303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.200519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.200579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.200723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.200757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.200934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.200971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.201180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.201231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.201442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.201491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.201677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.201710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.331 [2024-07-15 08:04:24.201849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.331 [2024-07-15 08:04:24.201887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.331 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.202080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.202132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 
00:37:33.332 [2024-07-15 08:04:24.202366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.202418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.202590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.202630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.202815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.202853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.203082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.203119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.203290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.203326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.203482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.203530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.203679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.203714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.203872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.203915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.204056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.204088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.204301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.204336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 
00:37:33.332 [2024-07-15 08:04:24.204491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.204528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.204684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.204721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.204912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.204948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.205138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.205190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.205409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.205461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.205683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.205733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.205861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.205901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.206091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.206143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.206350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.206402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.206620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.206669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 
00:37:33.332 [2024-07-15 08:04:24.206824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.206857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.207022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.207073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.207222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.207273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.207440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.207492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.207632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.207666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.207833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.207866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.208040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.208091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.208259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.208297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.208460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.208493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.208685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.208718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 
00:37:33.332 [2024-07-15 08:04:24.208891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.208938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.209136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.209189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.209374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.209412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.209649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.209686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.209862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.209924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.210061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.210093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.210279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.210316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.210474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.210510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.210724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.210760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.332 qpair failed and we were unable to recover it. 00:37:33.332 [2024-07-15 08:04:24.210987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.332 [2024-07-15 08:04:24.211035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 
00:37:33.333 [2024-07-15 08:04:24.211259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.211312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.211523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.211573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.211786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.211838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.211977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.212011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.212234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.212284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.212467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.212519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.212705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.212744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.212951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.212984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.213146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.213179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.213363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.213399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 
00:37:33.333 [2024-07-15 08:04:24.213562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.213598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.213848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.213888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.214053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.214090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.214253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.214285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.214446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.214483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.214700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.214737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.214938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.214975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.215132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.215180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.215376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.215430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.215625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.215679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 
00:37:33.333 [2024-07-15 08:04:24.215851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.215891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.216061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.216095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.216285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.216335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.216557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.216607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.216772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.216805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.216947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.216981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.217140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.217191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.217386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.217442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.217663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.217714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.217938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.217972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 
00:37:33.333 [2024-07-15 08:04:24.218178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.218228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.218387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.218436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.218612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.218662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.218823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.218856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.219075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.219127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.219336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.219385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.333 [2024-07-15 08:04:24.219597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.333 [2024-07-15 08:04:24.219647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.333 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.219835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.219869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.220091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.220142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.220283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.220334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 
00:37:33.334 [2024-07-15 08:04:24.220553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.220603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.220770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.220803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.220995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.221048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.221237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.221287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.221502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.221554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.221706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.221739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.221922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.221959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.222162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.222213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.222456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.222508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 00:37:33.334 [2024-07-15 08:04:24.222647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.222709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it. 
00:37:33.334 [2024-07-15 08:04:24.222938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.334 [2024-07-15 08:04:24.222971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.334 qpair failed and we were unable to recover it.
[condensed: this two-line error pair repeats continuously from 08:04:24.222938 through 08:04:24.267727, alternating between tqpair=0x615000210000 and tqpair=0x6150001f2a00, always against addr=10.0.0.2, port=4420; every attempt fails in posix.c:1038:posix_sock_create with errno = 111 (ECONNREFUSED), triggers the matching sock connection error in nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, and ends with "qpair failed and we were unable to recover it."]
00:37:33.339 [2024-07-15 08:04:24.267912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.267945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.268073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.268106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.268234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.268266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.268429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.268461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.268643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.268675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.268810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.268843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.269010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.269043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.269199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.269232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.269423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.269455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 00:37:33.339 [2024-07-15 08:04:24.269588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.339 [2024-07-15 08:04:24.269621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.339 qpair failed and we were unable to recover it. 
00:37:33.339 [2024-07-15 08:04:24.269769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.269802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.269944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.269977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.270157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.270190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.270348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.270381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.270570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.270603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.270816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.270852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.271019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.271052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.271218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.271251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.271419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.271451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.271610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.271642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 
00:37:33.340 [2024-07-15 08:04:24.271852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.271908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.272100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.272134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.272272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.272304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.272466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.272499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.272689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.272722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.272919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.272952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.273115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.273147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.273291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.273325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.273510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.273542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.273692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.273725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 
00:37:33.340 [2024-07-15 08:04:24.273918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.273951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.274121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.274154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.274316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.274349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.274547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.274580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.274763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.274805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.274955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.274998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.275214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.275283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.275517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.275554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.275755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.275792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.275988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.276025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 
00:37:33.340 [2024-07-15 08:04:24.276186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.276223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.276424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.276461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.276657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.276693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.276845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.276886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.277044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.277081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.277291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.277327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.277522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.277558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.277704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.277736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.277933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.277966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.340 [2024-07-15 08:04:24.278110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.278143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 
00:37:33.340 [2024-07-15 08:04:24.278301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.340 [2024-07-15 08:04:24.278333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.340 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.278501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.278534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.278697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.278730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.278902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.278935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.279078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.279111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.279242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.279274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.279434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.279466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.279620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.279653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.279808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.279841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.279983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.280016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 
00:37:33.341 [2024-07-15 08:04:24.280154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.280187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.280348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.280381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.280507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.280540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.280728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.280761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.280922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.280955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.281118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.281150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.281318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.281350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.281503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.281535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.281692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.281724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.281903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.281936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 
00:37:33.341 [2024-07-15 08:04:24.282120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.282152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.282280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.282313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.282437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.282470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.282630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.282662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.282805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.282842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.283015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.283049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.283209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.283241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.283431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.283463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.283602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.283635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.283760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.283792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 
00:37:33.341 [2024-07-15 08:04:24.283926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.283958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.284149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.284182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.284339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.284372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.284524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.284556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.284684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.284716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.284889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.284922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.285060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.285092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.285279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.285312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.285442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.285475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.285633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.285665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 
00:37:33.341 [2024-07-15 08:04:24.285853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.285893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.341 qpair failed and we were unable to recover it. 00:37:33.341 [2024-07-15 08:04:24.286058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.341 [2024-07-15 08:04:24.286090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.286220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.286252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.286423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.286456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.286618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.286651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.286818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.286851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.286999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.287032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.287195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.287227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.287386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.287419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.287587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.287630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 
00:37:33.342 [2024-07-15 08:04:24.287791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.287824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.287969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.288003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.288173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.288205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.288340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.288373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.288534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.288567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.288759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.288795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.289008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.289041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.289169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.289202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.289356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.289388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.289556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.289589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 
00:37:33.342 [2024-07-15 08:04:24.289775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.289807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.289972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.290005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.290131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.290163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.290302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.290334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.290524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.290561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.290682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.290715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.290847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.290884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.291038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.291071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.291257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.291290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.291429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.291462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 
00:37:33.342 [2024-07-15 08:04:24.291651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.291684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.291897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.291946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.292109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.292141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.292327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.292359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.292521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.292553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.292744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.292776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.342 [2024-07-15 08:04:24.292909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.342 [2024-07-15 08:04:24.292943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.342 qpair failed and we were unable to recover it. 00:37:33.343 [2024-07-15 08:04:24.293100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.343 [2024-07-15 08:04:24.293132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.343 qpair failed and we were unable to recover it. 00:37:33.343 [2024-07-15 08:04:24.293289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.343 [2024-07-15 08:04:24.293321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.343 qpair failed and we were unable to recover it. 00:37:33.343 [2024-07-15 08:04:24.293473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.343 [2024-07-15 08:04:24.293506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.343 qpair failed and we were unable to recover it. 
00:37:33.343 [2024-07-15 08:04:24.293690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.343 [2024-07-15 08:04:24.293723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.343 qpair failed and we were unable to recover it.
00:37:33.343 [... the preceding three-line error sequence repeats continuously from 08:04:24.293859 through 08:04:24.335101, roughly 200 further attempts; every connect() to tqpair=0x6150001f2a00 at 10.0.0.2, port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:37:33.348 [2024-07-15 08:04:24.335325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.335358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it.
00:37:33.348 [2024-07-15 08:04:24.335501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.335533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.335697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.335729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.335899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.335933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.336097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.336129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.336269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.336301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.336459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.336491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.336650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.336682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.336841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.336874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.337051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.337083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.337234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.337266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 
00:37:33.348 [2024-07-15 08:04:24.337395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.337428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.337588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.337621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.337755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.337787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.337974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.338007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.338174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.338206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.338398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.338440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.338602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.338635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.338820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.338852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.338994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.339027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.339189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.339221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 
00:37:33.348 [2024-07-15 08:04:24.339406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.339438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.339596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.339628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.339821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.339853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.339997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.340030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.340192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.348 [2024-07-15 08:04:24.340225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.348 qpair failed and we were unable to recover it. 00:37:33.348 [2024-07-15 08:04:24.340381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.340413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.340544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.340576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.340766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.340799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.340937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.340970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.341162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.341194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 
00:37:33.349 [2024-07-15 08:04:24.341380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.341413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.341601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.341633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.341804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.341838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.342015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.342049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.342204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.342236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.342417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.342450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.342634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.342667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.342805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.342837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.342999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.343032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.343197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.343230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 
00:37:33.349 [2024-07-15 08:04:24.343395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.343427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.343571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.343608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.343793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.343829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.343993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.344025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.344193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.344225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.344389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.344421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.344579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.344611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.344803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.344835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.345002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.345035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.345169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.345201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 
00:37:33.349 [2024-07-15 08:04:24.345389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.345422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.345581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.345613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.345799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.345834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.346025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.346061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.346260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.346296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.346480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.346527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.346707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.346744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.346924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.346957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.347116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.347149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.347337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.347370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 
00:37:33.349 [2024-07-15 08:04:24.347537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.347569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.347710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.347742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.347935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.347968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.348163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.348195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.348352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.348384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.349 [2024-07-15 08:04:24.348515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.349 [2024-07-15 08:04:24.348547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.349 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.348711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.348743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.348888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.348920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.349087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.349120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.349301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.349334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-07-15 08:04:24.349518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.349550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.349714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.349746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.349941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.349974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.350104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.350137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.350326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.350358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.350498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.350531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.350668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.350700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.350923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.350956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.351117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.351150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.351287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.351329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-07-15 08:04:24.351468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.351501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.351664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.351700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.351831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.351863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.352001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.352034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.352168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.352201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.352338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.352370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.352534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.352567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.352697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.352730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.352903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.352936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.353101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.353133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-07-15 08:04:24.353293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.353325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.353509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.353541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.353753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.353788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.353991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.354024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.354150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.354182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.354373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.354406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.354561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.354593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.354764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.354797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.354959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.354992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.355154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.355186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 
00:37:33.350 [2024-07-15 08:04:24.355316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.355348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.355505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.355537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.355696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.355728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.355888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.355921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.356061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.356093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.356278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.356310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.350 qpair failed and we were unable to recover it. 00:37:33.350 [2024-07-15 08:04:24.356469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.350 [2024-07-15 08:04:24.356501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.356686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.356719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.356910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.356943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.357128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.357160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-07-15 08:04:24.357316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.357348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.357542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.357574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.357699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.357732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.357893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.357925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.358049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.358081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.358267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.358299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.358429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.358461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.358585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.358618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.358783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.358816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.358949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.358982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-07-15 08:04:24.359177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.359209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.359362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.359399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.359557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.359589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.359716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.359748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.359885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.359918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.360109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.360142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.360298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.360331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.360496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.360528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.360696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.360728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 00:37:33.351 [2024-07-15 08:04:24.360891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.351 [2024-07-15 08:04:24.360924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.351 qpair failed and we were unable to recover it. 
00:37:33.351 [2024-07-15 08:04:24.361086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.351 [2024-07-15 08:04:24.361118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.351 qpair failed and we were unable to recover it.
00:37:33.351 [... this three-line error group repeats continuously with only the timestamps advancing (08:04:24.361245 through 08:04:24.402657, roughly 200 further occurrences, elapsed time 00:37:33.351-00:37:33.357), always the same connect() failed, errno = 111 from posix.c:1038:posix_sock_create followed by the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error for tqpair=0x6150001f2a00, addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it."; repetitions elided ...]
00:37:33.357 [2024-07-15 08:04:24.402824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.402856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.403021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.403054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.403226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.403258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.403422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.403454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.403644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.403676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.403803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.403836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.404002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.404034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.404190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.404223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.404386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.404418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.404597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.404629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-07-15 08:04:24.404791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.404827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.405003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.405036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.405220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.405252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.405452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.405485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.405648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.405680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.405838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.405870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.406039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.406071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.406216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.406248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.406431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.406464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.406623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.406656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-07-15 08:04:24.406865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.406937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.407105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.407142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.407311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.407343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.407476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.407508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.407641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.407673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.407809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.407841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.408011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.408043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.408205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.408238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.408400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.408432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.408617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.408650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-07-15 08:04:24.408780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.408812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.408973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.409005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.409160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.409193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.409349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.409380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.409515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.409548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.409728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.409763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.409942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.409974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.410131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.410163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.410317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.410350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 00:37:33.357 [2024-07-15 08:04:24.410537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.357 [2024-07-15 08:04:24.410569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.357 qpair failed and we were unable to recover it. 
00:37:33.357 [2024-07-15 08:04:24.410727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.410759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.410936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.410969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.411126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.411158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.411316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.411348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.411531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.411562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.411700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.411732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.411860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.411898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.412089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.412121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.412264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.412297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.412452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.412485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-07-15 08:04:24.412671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.412703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.412831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.412863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.413011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.413044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.413230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.413262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.413418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.413450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.413650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.413682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.413850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.413890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.414054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.414088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.414232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.414265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.414424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.414466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-07-15 08:04:24.414651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.414684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.414863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.414911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.415089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.415121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.415257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.415290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.415449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.415481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.415667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.415699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.415862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.415900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.416063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.416095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.416252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.416284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.416426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.416457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-07-15 08:04:24.416583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.416615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.416783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.416815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.416976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.417008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.417167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.417199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.417322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.417354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.417519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.417551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.417767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.417803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.418015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.418048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.418209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.418241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.358 [2024-07-15 08:04:24.418429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.418462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 
00:37:33.358 [2024-07-15 08:04:24.418624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.358 [2024-07-15 08:04:24.418655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.358 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.418817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.418849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.418989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.419022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.419211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.419243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.419404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.419436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.419598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.419630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.419836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.419872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.420079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.420115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.420320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.420356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.420556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.420592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-07-15 08:04:24.420744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.420794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.421003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.421040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.421234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.421270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.421458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.421494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.421665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.421701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.421947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.421980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.422118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.422150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.422314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.422346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.422533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.422565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.422724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.422756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-07-15 08:04:24.422900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.422933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.423089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.423128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.423266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.423298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.423464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.423501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.423662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.423694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.423854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.423891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.424056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.424089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.424254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.424287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.424445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.424477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.424635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.424667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 
00:37:33.359 [2024-07-15 08:04:24.424790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.424823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.425025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.425058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.425218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.425250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.425435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.425467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.425654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.425685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.359 [2024-07-15 08:04:24.425856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.359 [2024-07-15 08:04:24.425896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.359 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.426060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.426092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.426273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.426305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.426437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.426470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.426628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.426660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 
00:37:33.360 [2024-07-15 08:04:24.426847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.426891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.427056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.427088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.427274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.427306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.427434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.427496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.427677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.427713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.427919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.427952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.428125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.428157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.428321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.428353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.428542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.428574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 00:37:33.360 [2024-07-15 08:04:24.428759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.428791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it. 
00:37:33.360 [2024-07-15 08:04:24.428954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.360 [2024-07-15 08:04:24.428987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.360 qpair failed and we were unable to recover it.
[log truncated: the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 08:04:24.428954 through 08:04:24.471603, differing only in timestamp and in the tqpair pointer (0x6150001f2a00, 0x61500021ff00, 0x6150001ffe80). Every attempt to reach 10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:37:33.365 [2024-07-15 08:04:24.471769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.471803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.472012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.472060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.472206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.472239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.472398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.472431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.472616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.472648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.472839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.472883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.473087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.473119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.473256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.473289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.473454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.473487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.473650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.473683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 
00:37:33.365 [2024-07-15 08:04:24.473857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.473937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.474119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.474166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.474333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.474367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.474555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.474589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.474760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.474793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.474954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.474990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.475148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.475181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.475345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.475379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.475544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.475587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.475770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.475807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 
00:37:33.365 [2024-07-15 08:04:24.476038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.476087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.476259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.476296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.476472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.476508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.476785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.476845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.477047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.477083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.477251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.477284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.477495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.477558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.477800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.477838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.365 [2024-07-15 08:04:24.478026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.365 [2024-07-15 08:04:24.478066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.365 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.478252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.478289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-07-15 08:04:24.478448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.478483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.478658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.478696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.478897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.478948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.479134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.479170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.479384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.479423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.479594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.479632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.479838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.479892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.480125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.480162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.480318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.480369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.480576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.480615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-07-15 08:04:24.480821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.480857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.481071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.481109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.481348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.481382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.481577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.481614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.481848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.481894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.482061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.482094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.482263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.482297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.482488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.482521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.482685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.482718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.482849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.482889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-07-15 08:04:24.483082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.483116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.483306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.483339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.483533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.483567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.483728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.483761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.483957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.483992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.484138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.484172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.484335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.484368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.484535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.484568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.484767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.484804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.484972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.485007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-07-15 08:04:24.485169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.485202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.485330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.485364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.485566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.485599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.485767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.485800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.485963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.485997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.486156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.486189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.486364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.486398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.486560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.486593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.486775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.486817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.487004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.487038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 
00:37:33.366 [2024-07-15 08:04:24.487219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.487266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.487409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.487443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.487608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.487641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.487799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.487832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.488022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.488059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.488252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.366 [2024-07-15 08:04:24.488288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.366 qpair failed and we were unable to recover it. 00:37:33.366 [2024-07-15 08:04:24.488508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.488545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.488743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.488780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.488974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.489011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.489201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.489237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-07-15 08:04:24.489378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.489429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.489633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.489669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.489851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.489897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.490086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.490120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.490320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.490356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.490514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.490550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.490714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.490749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.490957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.490990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.491118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.491167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.491384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.491420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-07-15 08:04:24.491621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.491657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.491833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.491869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.492103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.492138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.492342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.492375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.492532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.492565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.492707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.492739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.492900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.492933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.493157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.493206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.493378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.493423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.493592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.493626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-07-15 08:04:24.493798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.493831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.494003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.494037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.494196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.494230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.494393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.494427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.494573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.494607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.494759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.494794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.495016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.495064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.495274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.495321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.495473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.495514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.495675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.495717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 
00:37:33.367 [2024-07-15 08:04:24.495899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.495933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.496097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.496130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.496290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.496323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.496519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.496551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.496713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.496747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.496888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.496934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.497145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.497192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.497342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.367 [2024-07-15 08:04:24.497377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.367 qpair failed and we were unable to recover it. 00:37:33.367 [2024-07-15 08:04:24.497532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.497565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.497727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.497760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-07-15 08:04:24.497923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.497956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.498106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.498138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.498294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.498326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.498487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.498520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.498678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.498710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.498867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.498906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.499072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.499105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.499261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.499293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.499480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.499513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 00:37:33.368 [2024-07-15 08:04:24.499649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.368 [2024-07-15 08:04:24.499682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.368 qpair failed and we were unable to recover it. 
00:37:33.368 [2024-07-15 08:04:24.499840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.368 [2024-07-15 08:04:24.499873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.368 qpair failed and we were unable to recover it.
00:37:33.368 [2024-07-15 08:04:24.506104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.368 [2024-07-15 08:04:24.506152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.368 qpair failed and we were unable to recover it.
00:37:33.370 [2024-07-15 08:04:24.525145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.370 [2024-07-15 08:04:24.525194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:33.370 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111 -> sock connection error -> qpair failed and we were unable to recover it.) repeats continuously from 08:04:24.499840 through 08:04:24.545304 for tqpair handles 0x6150001f2a00, 0x6150001ffe80, and 0x61500021ff00, all with addr=10.0.0.2, port=4420 ...]
00:37:33.656 [2024-07-15 08:04:24.545267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.656 [2024-07-15 08:04:24.545304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.656 qpair failed and we were unable to recover it.
00:37:33.656 [2024-07-15 08:04:24.545527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.545560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.545786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.545824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.545997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.546030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.546167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.546203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.546412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.546448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.546659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.546698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.546858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.546904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.547038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.547071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.547255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.547288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.547483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.547519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 
00:37:33.657 [2024-07-15 08:04:24.547724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.547762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.547931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.547965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.548129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.548162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.548368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.548403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.548595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.548645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.548854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.548895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.549041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.549075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.549265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.549302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.549450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.549501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.549700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.549737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 
00:37:33.657 [2024-07-15 08:04:24.549918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.549951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.550108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.550144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.550351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.550388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.550557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.550593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.550778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.550810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.551010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.551049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.551271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.551326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.551481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.551517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.551688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.551722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.551852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.551893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 
00:37:33.657 [2024-07-15 08:04:24.552068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.552105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.552361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.552398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.552604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.552640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.552818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.552851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.553015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.553051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.553276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.553321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.553526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.553562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.553716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.553748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.553929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.553962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.657 [2024-07-15 08:04:24.554130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.554197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 
00:37:33.657 [2024-07-15 08:04:24.554414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.657 [2024-07-15 08:04:24.554454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.657 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.554629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.554662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.554818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.554852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.555090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.555128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.555362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.555416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.555939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.555976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.556175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.556218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.556708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.556748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.556923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.556958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.557174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.557231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 
00:37:33.658 [2024-07-15 08:04:24.557388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.557439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.557656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.557688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.557886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.557923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.558111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.558174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.558380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.558419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.558638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.558671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.558816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.558849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.559005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.559042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.559237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.559274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.559428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.559480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 
00:37:33.658 [2024-07-15 08:04:24.559668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.559701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.559888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.559940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.560136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.560172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.560403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.560455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.560743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.560801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.561001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.561034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.561211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.561254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.561506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.561542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.561724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.561757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.561927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.561960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 
00:37:33.658 [2024-07-15 08:04:24.562154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.562215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.562554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.562595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.562803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.562848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.562997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.563032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.563182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.563219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.563431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.563497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.563707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.563745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.563920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.563963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.564140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.564193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.564461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.564522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 
00:37:33.658 [2024-07-15 08:04:24.564689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.658 [2024-07-15 08:04:24.564723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.658 qpair failed and we were unable to recover it. 00:37:33.658 [2024-07-15 08:04:24.564917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.564955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.565131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.565167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.565416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.565481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.565800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.565864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.566041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.566074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.566236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.566290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.566519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.566557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.566736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.566769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.566959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.566993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 
00:37:33.659 [2024-07-15 08:04:24.567146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.567182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.567349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.567385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.567590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.567626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.567803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.567835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.568005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.568054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.568269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.568310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.568600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.568660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.568816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.568849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.569041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.569080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.569303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.569341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 
00:37:33.659 [2024-07-15 08:04:24.569682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.569738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.569942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.569978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.570144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.570182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.570397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.570452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.570764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.570819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.571017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.571051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.571211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.571248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.571506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.571543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.571728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.571772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.571950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.571984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 
00:37:33.659 [2024-07-15 08:04:24.572173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.572219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.572460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.572497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.572695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.572743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.572941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.572975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.573137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.573179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.573394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.573438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.573657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.573694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.573873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.573916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.574093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.574130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.659 [2024-07-15 08:04:24.574365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.574416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 
00:37:33.659 [2024-07-15 08:04:24.574696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.659 [2024-07-15 08:04:24.574753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.659 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.574970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.575007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.575210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.575258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.575510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.575547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.575723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.575756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.575896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.575950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.576148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.576190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.576426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.576463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.576703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.576741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 00:37:33.660 [2024-07-15 08:04:24.576948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.660 [2024-07-15 08:04:24.576985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.660 qpair failed and we were unable to recover it. 
00:37:33.660 [2024-07-15 08:04:24.577159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.660 [2024-07-15 08:04:24.577196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:33.660 qpair failed and we were unable to recover it.
[... the same three-line connect() failed / sock connection error / qpair failed sequence repeats roughly 200 more times between 08:04:24.577 and 08:04:24.621, alternating between tqpair=0x61500021ff00 and tqpair=0x6150001ffe80, always for addr=10.0.0.2, port=4420 ...]
00:37:33.665 [2024-07-15 08:04:24.621770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.621803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.621941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.621975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.622131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.622179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.622367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.622403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.622541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.622575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.622750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.622784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.622951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.622985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.623179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.623212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.623374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.623416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.623605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.623638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 
00:37:33.665 [2024-07-15 08:04:24.623797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.623830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.624005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.624040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.624212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.665 [2024-07-15 08:04:24.624252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.665 qpair failed and we were unable to recover it. 00:37:33.665 [2024-07-15 08:04:24.624422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.624456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.624644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.624676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.624851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.624899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.625091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.625125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.625273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.625306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.625470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.625504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.625697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.625731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-07-15 08:04:24.625928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.625964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.626157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.626190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.626351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.626386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.626586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.626623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.626817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.626849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.627018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.627051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.627210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.627242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.627417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.627450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.627619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.627652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.627823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.627855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-07-15 08:04:24.628027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.628060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.628192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.628228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.628416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.628448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.628627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.628659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.628845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.628887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.629047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.629080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.629263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.629297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.629453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.629486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.629675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.629708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.629885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.629943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-07-15 08:04:24.630102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.630134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.630279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.630311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.630462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.630506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.630678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.630710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.630913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.630947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.631112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.631144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.631285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.631319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.631517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.631550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.631715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.631747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.631932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.631965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 
00:37:33.666 [2024-07-15 08:04:24.632148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.632180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.632340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.632374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.632538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.632570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.666 [2024-07-15 08:04:24.632733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.666 [2024-07-15 08:04:24.632766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.666 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.632927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.632960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.633117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.633155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.633333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.633524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.633556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.633708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.633741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.633884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.633930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-07-15 08:04:24.634095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.634127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.634287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.634319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.634488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.634522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.634733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.634770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.634983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.635016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.635183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.635215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.635417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.635450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.635608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.635640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.635780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.635812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.636033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.636066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-07-15 08:04:24.636206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.636242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.636425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.636458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.636673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.636705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.636842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.636875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.637022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.637054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.637244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.637276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.637471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.637520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.637727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.637759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.637972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.638010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.638214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.638250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-07-15 08:04:24.638521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.638575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.638754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.638788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.638965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.638999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.639293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.639376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.639611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.639651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.639836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.639882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.640030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.640066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.640313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.640370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.640598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.640635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.640816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.640851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 
00:37:33.667 [2024-07-15 08:04:24.641032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.641065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.641248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.641285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.641499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.641555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.641719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.667 [2024-07-15 08:04:24.641752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.667 qpair failed and we were unable to recover it. 00:37:33.667 [2024-07-15 08:04:24.641903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.641937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.642121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.642162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.642400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.642437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.642609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.642646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.642851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.642905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.643057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.643093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 
00:37:33.668 [2024-07-15 08:04:24.643282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.643319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.643560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.643603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.643797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.643830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.643977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.644011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.644188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.644248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.644513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.644572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.644755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.644790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.644976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.645010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.645174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.645211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.645457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.645494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 
00:37:33.668 [2024-07-15 08:04:24.645727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.645760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.645904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.645957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.646167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.646200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.646535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.646593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.646805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.646867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.647043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.647076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.647269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.647302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.647498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.647531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.647720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.647753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.647922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.647957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 
00:37:33.668 [2024-07-15 08:04:24.648149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.668 [2024-07-15 08:04:24.648183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.668 qpair failed and we were unable to recover it. 00:37:33.668 [2024-07-15 08:04:24.648342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.648374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.648540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.648575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.648772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.648805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.648986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.649020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.649202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.649236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.649425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.649458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.649621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.649654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.649840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.649888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.650095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.650128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 
00:37:33.669 [2024-07-15 08:04:24.650284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.650316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.650497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.650530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.650698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.650732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.650906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.650950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.651083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.651116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.651285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.651322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.651453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.651485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.651649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.651684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.651823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.651867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.652035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.652068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 
00:37:33.669 [2024-07-15 08:04:24.652234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.652266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.652411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.652445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.652604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.652638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.652799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.652832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.653033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.653067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.653226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.653259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.653454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.653488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.653652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.653685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.669 [2024-07-15 08:04:24.653873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.669 [2024-07-15 08:04:24.653933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.669 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.654105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.654139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 
00:37:33.670 [2024-07-15 08:04:24.654299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.654332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.654465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.654497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.654683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.654715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.654845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.654887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.655059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.655092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.655237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.655271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.655431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.655464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.655648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.655680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.655838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.655871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.656055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.656088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 
00:37:33.670 [2024-07-15 08:04:24.656230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.656262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.656419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.656451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.656614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.656647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.656805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.656837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.657053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.657102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.657248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.657283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.657469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.657502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.657695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.657728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.657894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.657928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.658114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.658147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 
00:37:33.670 [2024-07-15 08:04:24.658299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.658333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.658490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.658524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.658712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.658745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.658895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.658932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.659131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.659175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.659305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.659342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.659505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.659537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.659699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.659732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.670 qpair failed and we were unable to recover it. 00:37:33.670 [2024-07-15 08:04:24.659892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.670 [2024-07-15 08:04:24.659925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.660100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.660132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 
00:37:33.671 [2024-07-15 08:04:24.660294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.660325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.660508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.660540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.660695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.660726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.660884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.660916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.661074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.661105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.661297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.661329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.661512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.661543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.661723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.661754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.661890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.661931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.662096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.662131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 
00:37:33.671 [2024-07-15 08:04:24.662329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.662368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.662651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.662707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.662931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.662977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.663182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.663247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.663471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.663526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.663712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.663763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.663930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.663965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.664103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.664138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.664304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.664336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.664508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.664558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 
00:37:33.671 [2024-07-15 08:04:24.664699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.664733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.664923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.664960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.665182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.665220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.665417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.665469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.665630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.665663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.665857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.665897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.666060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.666096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.671 [2024-07-15 08:04:24.666306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.671 [2024-07-15 08:04:24.666357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.671 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.666553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.666586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.666763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.666797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 
00:37:33.672 [2024-07-15 08:04:24.666999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.667051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.667236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.667298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.667540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.667597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.667753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.667787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.667990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.668051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.668265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.668299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.668638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.668693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.668901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.668935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.669140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.669204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.669356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.669407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 
00:37:33.672 [2024-07-15 08:04:24.669571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.669621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.669824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.669866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.670059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.670109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.670317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.670368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.670529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.670580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.670718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.670751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.670936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.670970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.671153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.671210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.671402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.671465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.671668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.671701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 
00:37:33.672 [2024-07-15 08:04:24.671888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.671921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.672095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.672127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.672290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.672339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.672540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.672572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.672722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.672756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.672927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.672961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.673141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.673173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.673363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.672 [2024-07-15 08:04:24.673395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.672 qpair failed and we were unable to recover it. 00:37:33.672 [2024-07-15 08:04:24.673531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.673563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.673703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.673736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 
00:37:33.673 [2024-07-15 08:04:24.673898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.673942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.674133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.674191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.674388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.674444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.674608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.674640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.674792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.674825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.674988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.675023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.675200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.675236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.675458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.675508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.675672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.675713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.675956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.675992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 
00:37:33.673 [2024-07-15 08:04:24.676174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.676222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.676407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.676458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.676622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.676655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.676848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.676897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.677091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.677144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.677303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.677363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.677591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.677625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.677770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.677803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.677961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.678013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.678193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.678243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 
00:37:33.673 [2024-07-15 08:04:24.678410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.678466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.678631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.678664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.678825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.678868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.679038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.679096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.679290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.679345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.679542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.679592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.679764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.673 [2024-07-15 08:04:24.679797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.673 qpair failed and we were unable to recover it. 00:37:33.673 [2024-07-15 08:04:24.679985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.680048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.680226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.680279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.680447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.680499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-07-15 08:04:24.680667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.680701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.680868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.680912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.681093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.681128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.681343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.681395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.681614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.681665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.681811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.681844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.682019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.682053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.682193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.682227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.682423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.682465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.682604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.682636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-07-15 08:04:24.682785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.682819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.683052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.683104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.683322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.683387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.683588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.683622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.683766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.683799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.683958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.684010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.684193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.684244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.684383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.684417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.684585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.684618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 00:37:33.674 [2024-07-15 08:04:24.684783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.674 [2024-07-15 08:04:24.684820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.674 qpair failed and we were unable to recover it. 
00:37:33.674 [2024-07-15 08:04:24.685045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.685097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.685317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.685367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.685531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.685582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.685785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.685818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.686006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.686056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.686245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.686294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.686517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.686568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.686743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.686775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.686926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.686964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.687168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.687218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 
00:37:33.675 [2024-07-15 08:04:24.687396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.687447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.687629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.687662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.687825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.687856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.688074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.688132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.688306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.688356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.688524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.688580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.688744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.688778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.688965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.689017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.689212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.689264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.689475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.689526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 
00:37:33.675 [2024-07-15 08:04:24.689690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.689724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.689858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.689896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.690072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.690123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.690286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.690339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.690564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.690615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.690803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.690836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.691063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.691114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.691331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.691383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.675 [2024-07-15 08:04:24.691560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.675 [2024-07-15 08:04:24.691611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.675 qpair failed and we were unable to recover it. 00:37:33.676 [2024-07-15 08:04:24.691777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.676 [2024-07-15 08:04:24.691810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.676 qpair failed and we were unable to recover it. 
00:37:33.676 [2024-07-15 08:04:24.691967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.692018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.692207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.692257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.692442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.692499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.692651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.692683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.692869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.692908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.693093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.693145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.693365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.693415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.693616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.693668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.693804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.693836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.694026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.694061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.694258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.694309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.694483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.694535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.694728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.694762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.694921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.694957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.695115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.695147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.695342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.695394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.695567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.695600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.695737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.695771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.695953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.696006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.696194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.696245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.696454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.696505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.696713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.696747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.696930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.696992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.697176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.697227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.697398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.697451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.697634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.697668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.697850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.697905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.676 [2024-07-15 08:04:24.698145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.676 [2024-07-15 08:04:24.698185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.676 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.698374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.698411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.698635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.698686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.698825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.698858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.699026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.699060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.699242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.699299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.699462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.699512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.699650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.699683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.699871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.699913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.700060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.700093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.700255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.700293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.700530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.700566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.700774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.700811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.701006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.701040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.701201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.701233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.701416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.701458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.701670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.701707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.701922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.701955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.702142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.702195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.702375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.702410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.702613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.702649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.702825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.702865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.703169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.703206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.703391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.703423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.703575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.703611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.703813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.703849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.704070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.704103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.704267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.704303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.704500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.677 [2024-07-15 08:04:24.704536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.677 qpair failed and we were unable to recover it.
00:37:33.677 [2024-07-15 08:04:24.704772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.704808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.704983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.705026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.705230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.705266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.705446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.705478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.705638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.705674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.705849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.705898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.706102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.706134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.706304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.706340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.706505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.706541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.706725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.706757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.706893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.706926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.707090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.707122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.707308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.707343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.707559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.707595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.707751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.707787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.707981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.708014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.708190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.708226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.708369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.708405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.708579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.708614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.708795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.708831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.709082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.709114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.709309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.709341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.709555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.709591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.709766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.709802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.709994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.710027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.710276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.710312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.710454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.710494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.710683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.710718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.710892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.678 [2024-07-15 08:04:24.710948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.678 qpair failed and we were unable to recover it.
00:37:33.678 [2024-07-15 08:04:24.711109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.711141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.711341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.711374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.711526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.711562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.711730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.711765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.711949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.711981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.712190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.712226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.712430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.712466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.712672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.712708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.712871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.712928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.713088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.713121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.713322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.713367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.713555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.713590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.713787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.713822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.714030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.714063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.714262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.714299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.714483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.714520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.714695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.714727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.714908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.714945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.715098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.715133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.715335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.715366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.715549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.715585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.715757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.715792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.715975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.716008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.716188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.716223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.716405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.716441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.716619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.716650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.716829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.716866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.717049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.717086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.717294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.717327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.717514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.717550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.717762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.717799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.718007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.718040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.679 qpair failed and we were unable to recover it.
00:37:33.679 [2024-07-15 08:04:24.718230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.679 [2024-07-15 08:04:24.718266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.718452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.718484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.718641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.718673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.718840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.718872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.719037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.719072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.719254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.719290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.719471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.719506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.719689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.719723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.719883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.719916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.720086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.720122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.720303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.720338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.720518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.720550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.720689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.720721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.720890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.720923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.721118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.721150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.721332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.721369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.721519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.721554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.721716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.721748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.721898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.721931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.722098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.722147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.722350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.722382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.722539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.722574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.722753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.722788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.722996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.723029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.680 [2024-07-15 08:04:24.723194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.680 [2024-07-15 08:04:24.723226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.680 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.723431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.723467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.723623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.723657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.723788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.723838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.724067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.724100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.724282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.724314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.724496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.724531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.724713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.724747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.724907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.724939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.725069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.725101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.725289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.725324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.725529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.725569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.725719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.725754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.725927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.725963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.726187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.726219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.726381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.726417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.726595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.726630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.726806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.726837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.727036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.727082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.727262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.727298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.727493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.727525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.727702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.727743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.727921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.727957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.728143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.728175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.728354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.728389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.728549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.728585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.728769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.728805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.729033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.729069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.729207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.681 [2024-07-15 08:04:24.729243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.681 qpair failed and we were unable to recover it.
00:37:33.681 [2024-07-15 08:04:24.729441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-07-15 08:04:24.729473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-07-15 08:04:24.729666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-07-15 08:04:24.729702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-07-15 08:04:24.729900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-07-15 08:04:24.729941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.681 qpair failed and we were unable to recover it. 00:37:33.681 [2024-07-15 08:04:24.730090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.681 [2024-07-15 08:04:24.730121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.730277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.730309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.730435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.730467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.730656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.730687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.730931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.730964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.731163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.731194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.731393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.731425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-07-15 08:04:24.731579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.731616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.731830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.731866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.732069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.732101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.732293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.732329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.732471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.732506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.732689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.732722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.732941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.732977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.733130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.733171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.733333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.733365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.733587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.733624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-07-15 08:04:24.733809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.733841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.734019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.734051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.734224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.734256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.734443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.734477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.734643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.734675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.734897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.734930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.735062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.735095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.735266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.735298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.735494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.735530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.735690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.735722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 
00:37:33.682 [2024-07-15 08:04:24.735908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.735944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.736144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.736182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.682 [2024-07-15 08:04:24.736379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.682 [2024-07-15 08:04:24.736415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.682 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.736625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.736662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.736868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.736909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.737060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.737092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.737254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.737290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.737467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.737501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.737685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.737716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.737929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.737966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-07-15 08:04:24.738139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.738181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.738360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.738392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.738599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.738635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.738804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.738840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.739003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.739035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.739178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.739209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.739375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.739408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.739605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.739638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.739829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.739865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.740088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.740125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-07-15 08:04:24.740312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.740344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.740495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.740530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.740712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.740748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.740933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.740976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.741164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.741200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.741345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.741381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.741588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.741620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.741831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.741872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.742085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.742120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.742281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.742313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 
00:37:33.683 [2024-07-15 08:04:24.742442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.742492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.742667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.742701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.683 [2024-07-15 08:04:24.742867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.683 [2024-07-15 08:04:24.742904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.683 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.743091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.743126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.743303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.743337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.743528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.743561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.743767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.743803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.743953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.743988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.744174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.744205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.744391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.744426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 
00:37:33.684 [2024-07-15 08:04:24.744600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.744635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.744872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.744932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.745092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.745128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.745345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.745399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.745593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.745627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.745767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.745801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.746007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.746044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.746271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.746304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.746489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.746526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.746700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.746736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 
00:37:33.684 [2024-07-15 08:04:24.746895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.746928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.747068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.747102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.747282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.747317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.747501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.747534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.747718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.747754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.747944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.747977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.748115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.748148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.748362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.748398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.748606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.748642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.748848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.748901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 
00:37:33.684 [2024-07-15 08:04:24.749111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.749158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.749346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.684 [2024-07-15 08:04:24.749383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.684 qpair failed and we were unable to recover it. 00:37:33.684 [2024-07-15 08:04:24.749562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.749594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.749779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.749815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.749987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.750021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.750207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.750239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.750415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.750452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.750630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.750666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.750828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.750865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.751014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.751057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-07-15 08:04:24.751255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.751309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.751498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.751532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.751686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.751724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.751894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.751947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.752130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.752169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.752381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.752416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.752603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.752634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.752789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.752822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.752999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.753032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.753182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.753218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-07-15 08:04:24.753391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.753422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.753625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.753660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.753842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.753899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.754105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.754136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.754326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.754362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.754538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.754574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.754730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.754763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.754923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.754955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.755078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.755109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.755279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.755312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 
00:37:33.685 [2024-07-15 08:04:24.755519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.755554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.755721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.685 [2024-07-15 08:04:24.755756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.685 qpair failed and we were unable to recover it. 00:37:33.685 [2024-07-15 08:04:24.755928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.755960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.756124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.756173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.756378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.756413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.756576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.756608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.756776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.756808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.757044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.757077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.757265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.757297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.757486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.757520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 
00:37:33.686 [2024-07-15 08:04:24.757664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.757700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.757907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.757940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.758105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.758138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.758329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.758364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.758570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.758602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.758760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.758796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.758980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.759012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.759162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.759194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.759385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.759422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.759580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.759616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 
00:37:33.686 [2024-07-15 08:04:24.759801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.759834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.760010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.760042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.760197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.760233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.760410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.760442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.760621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.686 [2024-07-15 08:04:24.760669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.686 qpair failed and we were unable to recover it. 00:37:33.686 [2024-07-15 08:04:24.760834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-07-15 08:04:24.760870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-07-15 08:04:24.761054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-07-15 08:04:24.761086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-07-15 08:04:24.761353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-07-15 08:04:24.761390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-07-15 08:04:24.761647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-07-15 08:04:24.761704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 00:37:33.687 [2024-07-15 08:04:24.761903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.687 [2024-07-15 08:04:24.761936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.687 qpair failed and we were unable to recover it. 
00:37:33.687 [2024-07-15 08:04:24.762080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.687 [2024-07-15 08:04:24.762113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.687 qpair failed and we were unable to recover it.
00:37:33.687 [2024-07-15 08:04:24.764213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.687 [2024-07-15 08:04:24.764265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.687 qpair failed and we were unable to recover it.
00:37:33.688 [2024-07-15 08:04:24.768398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.688 [2024-07-15 08:04:24.768445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:33.688 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats for roughly 200 further reconnect attempts between 08:04:24.762 and 08:04:24.807, cycling across tqpair=0x6150001f2a00, 0x6150001ffe80, and 0x615000210000, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:37:33.694 [2024-07-15 08:04:24.807618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.694 [2024-07-15 08:04:24.807650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.694 qpair failed and we were unable to recover it.
00:37:33.694 [2024-07-15 08:04:24.807829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.807866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.808030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.808062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.808245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.808280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.808465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.808502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.808659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.808692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.808824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.808861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.809015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.809064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.809270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.809303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.809447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.809483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.809687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.809723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 
00:37:33.694 [2024-07-15 08:04:24.809908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.809941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.810069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.810100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.694 [2024-07-15 08:04:24.810306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.694 [2024-07-15 08:04:24.810356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.694 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.810507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.810540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.810675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.810707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.810893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.810944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.811121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.811154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.811330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.811367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.811569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.811605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.811783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.811819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 
00:37:33.695 [2024-07-15 08:04:24.812005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.812040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.812231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.812267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.812423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.812455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.812626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.812663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.812881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.812917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.813092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.813124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.813284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.813319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.813483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.813530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.813692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.813725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.813927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.813964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 
00:37:33.695 [2024-07-15 08:04:24.814141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.814176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.814361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.814393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.814534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.814565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.814706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.814737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.814919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.814952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.815125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.815165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.815345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.815381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.815533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.815565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.815714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.815746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.815936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.815985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 
00:37:33.695 [2024-07-15 08:04:24.816208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.816240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.816456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.816492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.695 [2024-07-15 08:04:24.816699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.695 [2024-07-15 08:04:24.816735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.695 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.816889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.816922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.817058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.817106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.817318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.817354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.817519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.817552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.817716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.817748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.817956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.817989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.818146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.818177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 
00:37:33.696 [2024-07-15 08:04:24.818386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.818420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.818599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.818634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.818807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.818839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.819018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.819050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.819207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.819238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.819389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.819420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.819591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.819626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.819772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.819808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.819978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.820012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.820177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.820215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 
00:37:33.696 [2024-07-15 08:04:24.820444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.820479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.820639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.820671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.820857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.820919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.821098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.821134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.821321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.821353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.821558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.821593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.821804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.821840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.822070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.822102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.822296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.822332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.822535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.822570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 
00:37:33.696 [2024-07-15 08:04:24.822759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.822790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.822977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.823009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.823186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.823222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.696 [2024-07-15 08:04:24.823395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.696 [2024-07-15 08:04:24.823427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.696 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.823607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.823643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.823848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.823895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.824062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.824094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.824268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.824300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.824509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.824546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.824723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.824755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 
00:37:33.697 [2024-07-15 08:04:24.824948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.824984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.825164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.825201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.825382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.825414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.825592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.825628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.825784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.825821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.826055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.826087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.826303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.826339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.826520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.826556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.826741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.826774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.826964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.827001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 
00:37:33.697 [2024-07-15 08:04:24.827179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.827215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.827421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.827464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.827676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.827713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.827906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.827939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.828099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.828132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.828290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.828326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.828478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.828514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.828715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.828748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.828905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.828943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.829129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.829171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 
00:37:33.697 [2024-07-15 08:04:24.829381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.829414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.829568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.829603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.829781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.829817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.697 qpair failed and we were unable to recover it. 00:37:33.697 [2024-07-15 08:04:24.830024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.697 [2024-07-15 08:04:24.830057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.830241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.830277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.830420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.830457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.830636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.830668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.830866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.830921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.831128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.831164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.831352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.831385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 
00:37:33.698 [2024-07-15 08:04:24.831580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.831616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.831825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.831865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.832065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.832097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.832284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.832320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.832523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.832559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.832742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.832774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.832940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.832977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.833181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.833217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.833369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.833401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.833582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.833617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 
00:37:33.698 [2024-07-15 08:04:24.833818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.833854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.834072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.834104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.834256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.834292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.834491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.834527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.834688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.834720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.834911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.834948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.698 [2024-07-15 08:04:24.835106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.698 [2024-07-15 08:04:24.835144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.698 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-15 08:04:24.835354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-15 08:04:24.835386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-15 08:04:24.835563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-15 08:04:24.835599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 00:37:33.699 [2024-07-15 08:04:24.835773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.699 [2024-07-15 08:04:24.835810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.699 qpair failed and we were unable to recover it. 
00:37:33.699 [2024-07-15 08:04:24.835976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.699 [2024-07-15 08:04:24.836009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.699 qpair failed and we were unable to recover it.
00:37:33.699 [2024-07-15 08:04:24.836163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.699 [2024-07-15 08:04:24.836214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.699 qpair failed and we were unable to recover it.
... [identical connect() retry failures for tqpair=0x6150001f2a00 repeat through 08:04:24.868035] ...
00:37:33.983 [2024-07-15 08:04:24.868242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.983 [2024-07-15 08:04:24.868295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.983 qpair failed and we were unable to recover it.
... [the same failure then alternates between tqpair=0x6150001f2a00 and tqpair=0x6150001ffe80 through 08:04:24.880759] ...
00:37:33.984 [2024-07-15 08:04:24.880717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.984 [2024-07-15 08:04:24.880759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.984 qpair failed and we were unable to recover it.
00:37:33.984 [2024-07-15 08:04:24.880927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.880961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.881134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.881168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.881336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.881388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.881548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.881579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.881754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.881788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.882006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.882038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.882240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.882272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.882480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.882515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.882658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.882694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.882851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.882895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 
00:37:33.984 [2024-07-15 08:04:24.883055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.883087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.883280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.883312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.883471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.883503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.883660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.883697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.984 qpair failed and we were unable to recover it. 00:37:33.984 [2024-07-15 08:04:24.883982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.984 [2024-07-15 08:04:24.884030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.884219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.884252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.884427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.884463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.884641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.884677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.884833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.884866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.885054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.885086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-07-15 08:04:24.885271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.885306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.885492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.885523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.885651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.885682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.885831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.885867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.886038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.886070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.886230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.886261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.886455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.886496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.886704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.886736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.886967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.887000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.887165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.887216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-07-15 08:04:24.887397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.887429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.887589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.887631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.887792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.887824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.887998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.888030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.888239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.888274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.888455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.888492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.888669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.888701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.888867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.888910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.889065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.889096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.889246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.889278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-07-15 08:04:24.889441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.889482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.889658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.889694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.889853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.889895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.890093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.890125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.890362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.890398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.890584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.890616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.890798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.890834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.891062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.891095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.891267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.891298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.891522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.891559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 
00:37:33.985 [2024-07-15 08:04:24.891737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.891773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.891950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.891983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.985 qpair failed and we were unable to recover it. 00:37:33.985 [2024-07-15 08:04:24.892139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.985 [2024-07-15 08:04:24.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.892398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.892433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.892587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.892619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.892773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.892805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.893012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.893045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.893179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.893211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.893391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.893424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.893639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.893674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-07-15 08:04:24.893851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.893888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.894073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.894106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.894307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.894339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.894497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.894529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.894713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.894749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.894949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.894982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.895142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.895185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.895335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.895371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.895512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.895548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.895725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.895757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-07-15 08:04:24.895949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.895987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.896165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.896200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.896406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.896443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.896594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.896629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.896781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.896817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.896995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.897028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.897204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.897239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.897446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.897479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.897633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.897665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.897871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.897912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 
00:37:33.986 [2024-07-15 08:04:24.898099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.898135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.898325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.898358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.898492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.898524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.986 qpair failed and we were unable to recover it. 00:37:33.986 [2024-07-15 08:04:24.898683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.986 [2024-07-15 08:04:24.898732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.898903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.898937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.899140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.899176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.899359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.899396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.899557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.899589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.899727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.899759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.899966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.900002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-07-15 08:04:24.900194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.900228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.900369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.900401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.900594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.900627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.900826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.900859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.901050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.901085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.901270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.901317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.901522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.901554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.901739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.901775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.901912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.901948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.902156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.902188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-07-15 08:04:24.902365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.902401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.902580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.902618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.902796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.902829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.902969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.903002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.903167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.903200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.903383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.903415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.903626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.903666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.903822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.903857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.904042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.904074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.904283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.904319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-07-15 08:04:24.904468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.904503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.904661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.904693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.904858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.904918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.905108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.905144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.905301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.905333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.905509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.905544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.905722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.905758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.905952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.905988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.906188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.906224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.906428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.906465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 
00:37:33.987 [2024-07-15 08:04:24.906649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.906682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.906830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.987 [2024-07-15 08:04:24.906865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.987 qpair failed and we were unable to recover it. 00:37:33.987 [2024-07-15 08:04:24.907059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.907092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.907221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.907254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.907427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.907463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.907609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.907645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.907825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.907857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.908047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.908083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.908240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.908277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 00:37:33.988 [2024-07-15 08:04:24.908484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.988 [2024-07-15 08:04:24.908516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.988 qpair failed and we were unable to recover it. 
00:37:33.988 [2024-07-15 08:04:24.908699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.988 [2024-07-15 08:04:24.908735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.988 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 08:04:24.908904 and 08:04:24.953503 ...]
00:37:33.993 [2024-07-15 08:04:24.953682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.993 [2024-07-15 08:04:24.953713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.993 qpair failed and we were unable to recover it.
00:37:33.993 [2024-07-15 08:04:24.953880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.953930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.954083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.954118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.954299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.954331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.954456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.954506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.954688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.954724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.954912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.954944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.955129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.955165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.955362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.955398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.955607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.955639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.955814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.955849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 
00:37:33.993 [2024-07-15 08:04:24.956061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.993 [2024-07-15 08:04:24.956098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.993 qpair failed and we were unable to recover it. 00:37:33.993 [2024-07-15 08:04:24.956274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.956316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.956466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.956501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.956706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.956742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.956950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.956982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.957143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.957179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.957357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.957393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.957571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.957603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.957776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.957811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.957973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.958009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 
00:37:33.994 [2024-07-15 08:04:24.958195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.958227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.958404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.958440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.958621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.958657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.958828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.958860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.959017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.959053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.959262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.959298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.959446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.959478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.959640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.959672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.959827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.959860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.960027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.960060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 
00:37:33.994 [2024-07-15 08:04:24.960239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.960275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.960434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.960470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.960672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.960704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.960894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.960931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.961104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.961140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.961295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.961327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.961536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.961572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.961753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.961789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.961979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.962012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.962188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.962223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 
00:37:33.994 [2024-07-15 08:04:24.962399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.962435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.962592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.962625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.962796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.962831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.962986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.963023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.963203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.963235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.963438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.963474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.963656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.963699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.963889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.963923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.964122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.964157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.994 qpair failed and we were unable to recover it. 00:37:33.994 [2024-07-15 08:04:24.964365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.994 [2024-07-15 08:04:24.964401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-07-15 08:04:24.964608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.964640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.964790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.964827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.965015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.965051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.965256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.965288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.965467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.965502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.965672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.965709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.965902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.965954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.966119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.966151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.966381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.966417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.966625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.966657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-07-15 08:04:24.966849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.966918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.967110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.967142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.967368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.967401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.967544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.967579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.967748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.967783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.967940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.967972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.968133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.968166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.968362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.968399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.968645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.968676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.968845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.968886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-07-15 08:04:24.969034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.969070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.969244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.969276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.969407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.969440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.969640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.969676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.969835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.969867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.970032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.970073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.970251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.970284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.970444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.970476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.970609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.970656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.970870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.970920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 
00:37:33.995 [2024-07-15 08:04:24.971107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.971139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.971345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.995 [2024-07-15 08:04:24.971381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.995 qpair failed and we were unable to recover it. 00:37:33.995 [2024-07-15 08:04:24.971593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.971629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.971845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.971884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.972071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.972107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.972297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.972329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.972516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.972552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.972739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.972775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.972983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.973020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.973199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.973231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 
00:37:33.996 [2024-07-15 08:04:24.973407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.973443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.973592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.973627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.973833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.973865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.974105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.974141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.974285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.974320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.974505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.974537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.974720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.974756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.974936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.974973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.975150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.975182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.975390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.975425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 
00:37:33.996 [2024-07-15 08:04:24.975629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.975666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.975831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.975863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.976045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.976081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.976230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.976266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.976472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.976504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.976694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.976730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.976950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.976986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.977149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.977182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.977332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.977368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.977572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.977608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 
00:37:33.996 [2024-07-15 08:04:24.977797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.977830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.977994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.978027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.978246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.978282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.978481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.978513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.978678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.978714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.978890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.978927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.979085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.979118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.979288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.979324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.979513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.979546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.996 [2024-07-15 08:04:24.979674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.979706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 
00:37:33.996 [2024-07-15 08:04:24.979890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.996 [2024-07-15 08:04:24.979926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.996 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.980097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.980133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.980334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.980366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.980555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.980591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.980756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.980794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.980981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.981014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.981237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.981278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.981422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.981459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.981625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.981657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 00:37:33.997 [2024-07-15 08:04:24.981812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.997 [2024-07-15 08:04:24.981845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:33.997 qpair failed and we were unable to recover it. 
00:37:33.997 [2024-07-15 08:04:24.982052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.997 [2024-07-15 08:04:24.982088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:33.997 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 08:04:24.982 and 08:04:25.026 ...]
00:37:34.003 [2024-07-15 08:04:25.026409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.003 [2024-07-15 08:04:25.026444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.003 qpair failed and we were unable to recover it.
00:37:34.003 [2024-07-15 08:04:25.026638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-07-15 08:04:25.026675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.003 [2024-07-15 08:04:25.026853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.003 [2024-07-15 08:04:25.026897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.003 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.027051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.027087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.027245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.027281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.027430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.027463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.027624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.027656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.027847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.027896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.028058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.028090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.028269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.028304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.028510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.028547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 
00:37:34.004 [2024-07-15 08:04:25.028759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.028791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.028976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.029013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.029184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.029220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.029366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.029403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.029583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.029619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.029796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.029832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.030019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.030051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.030234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.030270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.030414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.030450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.030610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.030643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 
00:37:34.004 [2024-07-15 08:04:25.030805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.030837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.031038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.031071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.031202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.031234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.031413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.031449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.031627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.031663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.031839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.031871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.032077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.032112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.032314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.032350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.032529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.032561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.032766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.032802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 
00:37:34.004 [2024-07-15 08:04:25.032990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.033032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.033220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.033252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.033402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.033438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.033657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.033689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.033827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.004 [2024-07-15 08:04:25.033871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.004 qpair failed and we were unable to recover it. 00:37:34.004 [2024-07-15 08:04:25.034057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.034092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.034281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.034317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.034475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.034508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.034693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.034724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.034931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.034968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 
00:37:34.005 [2024-07-15 08:04:25.035160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.035192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.035396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.035432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.035642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.035677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.035865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.035903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.036063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.036099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.036284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.036320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.036494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.036526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.036691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.036722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.036871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.036910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.037068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.037100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 
00:37:34.005 [2024-07-15 08:04:25.037256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.037292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.037442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.037478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.037660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.037693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.037821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.037857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.038037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.038070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.038231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.038273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.038419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.038454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.038601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.038638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.005 qpair failed and we were unable to recover it. 00:37:34.005 [2024-07-15 08:04:25.038815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.005 [2024-07-15 08:04:25.038847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.038981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.039013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 
00:37:34.006 [2024-07-15 08:04:25.039208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.039244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.039428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.039460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.039649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.039681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.039852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.039908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.040089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.040121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.040327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.040363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.040584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.040621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.040846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.040885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.041067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.041103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.041284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.041320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 
00:37:34.006 [2024-07-15 08:04:25.041523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.041555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.041729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.041765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.041945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.041981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.042181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.042214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.042409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.042445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.042649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.042685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.042835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.042867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.043083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.043119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.043291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.043327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.043508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.043541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 
00:37:34.006 [2024-07-15 08:04:25.043694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.043729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.043888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.043925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.044100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.044132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.044293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.044343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.044518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.044553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.044737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.044769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.044906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.044967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.045146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.045182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.045374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.045406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 00:37:34.006 [2024-07-15 08:04:25.045545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.006 [2024-07-15 08:04:25.045577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.006 qpair failed and we were unable to recover it. 
00:37:34.006 [2024-07-15 08:04:25.045759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.045795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.045950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.045983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.046143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.046192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.046366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.046407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.046570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.046602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.046791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.046826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.047013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.047050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.047235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.047267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.047418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.047453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.047608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.047644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 
00:37:34.007 [2024-07-15 08:04:25.047826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.047867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.048038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.048070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.048257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.048294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.048478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.048510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.048654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.048689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.048870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.048920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.049127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.049160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.049337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.049373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.049527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.049573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.049755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.049787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 
00:37:34.007 [2024-07-15 08:04:25.049941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.049993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.050143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.050179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.050373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.050405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.050584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.050620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.050794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.050829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.051019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.051052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.051255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.051292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.051499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.051535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.051721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.051753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 00:37:34.007 [2024-07-15 08:04:25.051973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.007 [2024-07-15 08:04:25.052015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.007 qpair failed and we were unable to recover it. 
00:37:34.007 [2024-07-15 08:04:25.052163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.052195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.052390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.052422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.052620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.052656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.052812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.052848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.053013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.053046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.053205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.053237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.053427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.053463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.053639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.053682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.053840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.053882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 00:37:34.008 [2024-07-15 08:04:25.054031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.008 [2024-07-15 08:04:25.054067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.008 qpair failed and we were unable to recover it. 
00:37:34.008 [2024-07-15 08:04:25.054252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.008 [2024-07-15 08:04:25.054284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.008 qpair failed and we were unable to recover it.
00:37:34.016 [the three lines above repeat verbatim, timestamps 2024-07-15 08:04:25.054459 through 08:04:25.098884: every connect() attempt to tqpair=0x6150001f2a00 at 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair is never recovered]
00:37:34.016 [2024-07-15 08:04:25.099060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.099092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.099217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.099268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.099473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.099510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.099665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.099697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.099903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.099940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.100114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.100151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.100329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.100361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.100539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.100575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.100745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.100781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.100952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.100985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 
00:37:34.016 [2024-07-15 08:04:25.101134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.101171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.101375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.101411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.101584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.101617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.101778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.101810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.101973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.102006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.102136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.102167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.102326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.102375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.102556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.102589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.102770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.102803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.102984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.103021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 
00:37:34.016 [2024-07-15 08:04:25.103194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.103230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.103398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.103431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.103608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.103643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.103851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.016 [2024-07-15 08:04:25.103894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.016 qpair failed and we were unable to recover it. 00:37:34.016 [2024-07-15 08:04:25.104073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.104106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.104284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.104319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.104490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.104526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.104676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.104708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.104837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.104905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.105064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.105101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 
00:37:34.017 [2024-07-15 08:04:25.105282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.105314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.105496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.105532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.105737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.105773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.105919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.105951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.106087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.106136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.106311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.106357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.106506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.106538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.106740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.106776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.106956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.106993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.107180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.107212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 
00:37:34.017 [2024-07-15 08:04:25.107407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.107439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.107652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.107688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.107845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.107883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.108057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.108092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.108244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.108280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.108469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.108501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.108683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.108718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.108895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.108932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.109090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.109122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.109274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.109324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 
00:37:34.017 [2024-07-15 08:04:25.109526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.109562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.109745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.109777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.109960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.109996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.110150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.017 [2024-07-15 08:04:25.110186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.017 qpair failed and we were unable to recover it. 00:37:34.017 [2024-07-15 08:04:25.110367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.110399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.110573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.110608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.110783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.110818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.111028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.111060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.111260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.111296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.111446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.111482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 
00:37:34.018 [2024-07-15 08:04:25.111664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.111699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.111833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.111867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.112064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.112101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.112307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.112339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.112471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.112503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.112703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.112740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.112925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.112958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.113144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.113180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.113336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.113372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.113559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.113591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 
00:37:34.018 [2024-07-15 08:04:25.113795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.113830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.114017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.114053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.114258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.114290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.114470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.114531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.114757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.114795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.114968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.115000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.115150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.115186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.115393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.115429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.115606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.018 [2024-07-15 08:04:25.115639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.018 qpair failed and we were unable to recover it. 00:37:34.018 [2024-07-15 08:04:25.115845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.115886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 
00:37:34.019 [2024-07-15 08:04:25.116096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.116132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.116290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.116322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.116488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.116520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.116725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.116761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.116938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.116971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.117147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.117183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.117361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.117396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.117549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.117581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.117735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.117786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.117959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.117995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 
00:37:34.019 [2024-07-15 08:04:25.118198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.118230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.118447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.118478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.118640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.118672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.118839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.118872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.119059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.119094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.119282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.119314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.119499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.119531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.119739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.119774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.119930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.119967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.120150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.120192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 
00:37:34.019 [2024-07-15 08:04:25.120369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.120409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.120589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.120625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.120808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.120841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.121039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.121072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.121220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.121256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.121431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.121463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.121620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.121667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.121872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.121916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.122128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.019 [2024-07-15 08:04:25.122160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.019 qpair failed and we were unable to recover it. 00:37:34.019 [2024-07-15 08:04:25.122313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.122349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 
00:37:34.020 [2024-07-15 08:04:25.122496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.122532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.122738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.122770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.122897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.122951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.123129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.123166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.123351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.123383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.123535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.123571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.123748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.123785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.123973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.124006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.124182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.124218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.124367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.124403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 
00:37:34.020 [2024-07-15 08:04:25.124575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.124607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.124789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.124826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.125008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.125044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.125229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.125261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.125461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.125497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.125668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.125703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.125889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.125922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.126110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.126147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.126325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.126361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.126544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.126576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 
00:37:34.020 [2024-07-15 08:04:25.126731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.126763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.126986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.127019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.127176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.127389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.127424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.127599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.127635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.127810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.127842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.128033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.128066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.128244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.128281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.128442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.020 [2024-07-15 08:04:25.128475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.020 qpair failed and we were unable to recover it. 00:37:34.020 [2024-07-15 08:04:25.128680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.021 [2024-07-15 08:04:25.128716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.021 qpair failed and we were unable to recover it. 
00:37:34.024 [2024-07-15 08:04:25.147934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.147970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.148154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.148186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.148331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.148366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.148571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.148607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.148820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.148852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.148923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:34.024 [2024-07-15 08:04:25.149204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.149250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.149425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.149462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.149602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.149637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.024 [2024-07-15 08:04:25.149837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.024 [2024-07-15 08:04:25.149871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.024 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.157854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.157893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.158056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.158089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.158284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.158319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.158505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.158549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.158737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.158771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.158960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.158994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.159161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.159196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.159399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.159447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.159660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.159695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.025 [2024-07-15 08:04:25.159858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.025 [2024-07-15 08:04:25.159908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.025 qpair failed and we were unable to recover it.
00:37:34.027 [2024-07-15 08:04:25.168010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.168043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.168171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.168203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.168356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.168388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.168553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.168586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.168747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.168779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.168929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.168976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.169133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.169168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.169342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.169375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.169526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.169558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.169721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.169753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 
00:37:34.027 [2024-07-15 08:04:25.169926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.169960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.170107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.170139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.170315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.170348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.170474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.170506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.170649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.170684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.170813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.170858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.171021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.171053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.171210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.171242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.171404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.171437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.171571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.171604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 
00:37:34.027 [2024-07-15 08:04:25.171736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.171769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.171935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.171968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.172096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.172129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.172283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.172315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.172452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.172488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.172629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.172661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.172818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.172851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.173012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.027 [2024-07-15 08:04:25.173045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.027 qpair failed and we were unable to recover it. 00:37:34.027 [2024-07-15 08:04:25.173231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.173263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.173417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.173450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 
00:37:34.028 [2024-07-15 08:04:25.173637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.173669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.173789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.173821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.173981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.174030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.174203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.174238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.174434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.174467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.174655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.174688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.174851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.174892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.175042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.175090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.175301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.175337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.175530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.175564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 
00:37:34.028 [2024-07-15 08:04:25.175721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.175754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.175899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.175934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.176130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.176163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.176347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.176379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.176531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.176564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.176729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.176763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.176915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.176948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.177121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.177169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.177355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.177391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.177559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.177593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 
00:37:34.028 [2024-07-15 08:04:25.177747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.177780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.177929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.177963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.178104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.178138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.178334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.178367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.178532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.178565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.178740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.178787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.178974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.179021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.179198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.179233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.179372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.179405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.179532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.179564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 
00:37:34.028 [2024-07-15 08:04:25.179700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.179733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.179890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.028 [2024-07-15 08:04:25.179923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.028 qpair failed and we were unable to recover it. 00:37:34.028 [2024-07-15 08:04:25.180083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.180115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.180273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.180306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.180479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.180517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.180705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.180737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.180874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.180923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.181092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.181124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.181308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.181342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.181481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.181514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 
00:37:34.029 [2024-07-15 08:04:25.181671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.181703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.181868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.181910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.182052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.182084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.182264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.182312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.182490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.182525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.182668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.182703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.182840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.182874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.183052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.183086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.183288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.183322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.183479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.183512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 
00:37:34.029 [2024-07-15 08:04:25.183675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.183708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.183868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.183907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.184096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.184129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.184313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.184346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.184510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.184543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.184752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.184787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.184962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.185009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.185194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.185229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.185417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.185451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.185589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.185621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 
00:37:34.029 [2024-07-15 08:04:25.185800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.185833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.186036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.186069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.186257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.186290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.186431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.029 [2024-07-15 08:04:25.186464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.029 qpair failed and we were unable to recover it. 00:37:34.029 [2024-07-15 08:04:25.186637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.186670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.186832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.186865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.187025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.187060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.187188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.187227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.187388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.187420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.187586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.187619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 
00:37:34.030 [2024-07-15 08:04:25.187813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.187846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.188010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.188044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.030 [2024-07-15 08:04:25.188240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.030 [2024-07-15 08:04:25.188272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.030 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.188436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.188471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.188601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.188639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.188824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.188857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.189005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.189039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.189169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.189201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.189374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.189407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.189565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.189613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 
00:37:34.309 [2024-07-15 08:04:25.189759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.189794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.189960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.189995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.190141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.190185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.190391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.190424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.190567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.190600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.190785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.190817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.190949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.309 [2024-07-15 08:04:25.190982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.309 qpair failed and we were unable to recover it. 00:37:34.309 [2024-07-15 08:04:25.191172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.191205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.191374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.191407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.191542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.191575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 
00:37:34.310 [2024-07-15 08:04:25.191738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.191771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.191950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.191997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.192167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.192202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.192391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.192424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.192562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.192594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.192759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.192793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.192957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.192990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.193197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.193232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.193396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.193429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.193588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.193621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 
00:37:34.310 [2024-07-15 08:04:25.193760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.193792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.193960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.193993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.194119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.194152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.194324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.194358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.194523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.194567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.194709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.194742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.194884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.194918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.195083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.195116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.195302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.195334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 00:37:34.310 [2024-07-15 08:04:25.195495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.310 [2024-07-15 08:04:25.195529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.310 qpair failed and we were unable to recover it. 
00:37:34.310 [2024-07-15 08:04:25.195658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.310 [2024-07-15 08:04:25.195691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.310 qpair failed and we were unable to recover it.
00:37:34.311 [last three messages repeated dozens of times, 08:04:25.195-25.203, with only the timestamps changing]
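For readers triaging this failure: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections at 10.0.0.2:4420 (the NVMe/TCP well-known port) while the initiator kept retrying. A minimal standalone C sketch, not SPDK code, that reproduces the exact condition the log reports (the address and port simply mirror the log):

/* Probe 10.0.0.2:4420 the way posix_sock_create would; with no target
 * listening, connect() fails with errno = 111 (Connection refused). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}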
00:37:34.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1252713 Killed "${NVMF_APP[@]}" "$@"
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:34.312 [the errno = 111 / tqpair 0x6150001ffe80 failure triplet keeps repeating around these trace lines, 08:04:25.203-25.206]
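The "1252713 Killed" line is bash reporting that the previous nvmf_tgt instance was terminated with SIGKILL at target_disconnect.sh line 36, which appears to be the deliberate target kill this disconnect test performs and would explain the flood of refused connections above. A sketch of the mechanism behind that diagnostic (the child here is a stand-in, not the actual NVMF_APP invocation):

/* When a child dies from SIGKILL, the parent observes it via
 * WIFSIGNALED/WTERMSIG; an interactive shell renders signal 9 as "Killed". */
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {               /* child: idle like a long-running target */
        pause();
        _exit(0);
    }
    kill(pid, SIGKILL);           /* what the test does to the old target */

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("%d Killed (signal %d)\n", (int)pid, WTERMSIG(status));
    return 0;
}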
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1253329
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1253329
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1253329 ']'
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:34.312 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:34.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:34.313 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:34.313 08:04:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:34.313 [the errno = 111 / tqpair 0x6150001ffe80 failure triplet keeps repeating around these trace lines, 08:04:25.206-25.210]
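Here the trace shows the test relaunching the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk network namespace) and then waiting on it with waitforlisten 1253329, using rpc_addr=/var/tmp/spdk.sock and max_retries=100 as logged. The real waitforlisten is a bash helper in SPDK's test harness; as a rough illustration of the waiting logic only (an assumption, not the actual implementation), a C loop that polls the RPC UNIX socket:

/* Illustration of a waitforlisten-style poll: retry until something accepts
 * connections on /var/tmp/spdk.sock, up to max_retries (100 per the log).
 * This mimics the idea only; the real helper is shell, not C. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int rpc_listening(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return 0;
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
    int ok = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    for (int retry = 0; retry < 100; retry++) {    /* max_retries=100 */
        if (rpc_listening("/var/tmp/spdk.sock")) {
            puts("target is up");
            return 0;
        }
        usleep(100 * 1000);                        /* brief pause, then retry */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}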
00:37:34.313 [the same three-message failure repeats continuously while the new target comes up, 08:04:25.210-25.237, ending with:]
00:37:34.317 [2024-07-15 08:04:25.237381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.317 [2024-07-15 08:04:25.237414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.317 qpair failed and we were unable to recover it.
00:37:34.317 [2024-07-15 08:04:25.237590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.237623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.237799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.237832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.237987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.238020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.238207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.238240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.238426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.238459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.238599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.238631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.238785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.238821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.239042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.239076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.239244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.239277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.239417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.239450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 
00:37:34.317 [2024-07-15 08:04:25.239621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.239654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.239809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.239844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.240034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.240067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.240237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.317 [2024-07-15 08:04:25.240269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.317 qpair failed and we were unable to recover it. 00:37:34.317 [2024-07-15 08:04:25.240416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.240449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.240611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.240644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.240836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.240871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.241036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.241069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.241206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.241240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.241391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.241425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 
00:37:34.318 [2024-07-15 08:04:25.241614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.241647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.241792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.241825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.242010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.242043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.242221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.242254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.242412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.242445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.242632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.242665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.242846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.242888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.243080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.243112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.243281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.243314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.243458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.243490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 
00:37:34.318 [2024-07-15 08:04:25.243677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.243711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.243897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.243951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.244117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.244150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.244313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.244345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.244507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.244540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.244703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.244735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.244894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.244927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.245091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.245124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.245284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.245317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.245502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.245535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 
00:37:34.318 [2024-07-15 08:04:25.245723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.245755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.245889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.245922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.246087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.246119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.246259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.246292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.246456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.246488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.246624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.246656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.246814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.246846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.247020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.247054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.318 qpair failed and we were unable to recover it. 00:37:34.318 [2024-07-15 08:04:25.247217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.318 [2024-07-15 08:04:25.247260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.247424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.247456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 
00:37:34.319 [2024-07-15 08:04:25.247589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.247623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.247752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.247789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.247983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.248018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.248212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.248244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.248379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.248412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.248575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.248608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.248792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.248826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.249025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.249058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.249186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.249219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.249383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.249416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 
00:37:34.319 [2024-07-15 08:04:25.249547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.249580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.249744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.249777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.249960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.250000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.250161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.250193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.250359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.250392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.250539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.250572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.250737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.250770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.250903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.250937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.251113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.251147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.251280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.251312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 
00:37:34.319 [2024-07-15 08:04:25.251478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.251511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.251695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.251727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.251861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.251911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.252080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.252112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.252299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.252331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.252462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.252495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.252676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.252709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.319 [2024-07-15 08:04:25.252872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.319 [2024-07-15 08:04:25.252927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.319 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.253091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.253123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.253281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.253314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 
00:37:34.320 [2024-07-15 08:04:25.253473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.253506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.253678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.253710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.253871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.253916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.254080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.254112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.254251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.254283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.254446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.254479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.254613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.254651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.254812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.254845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.254985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.255019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.255160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.255192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 
00:37:34.320 [2024-07-15 08:04:25.255327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.255365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.255574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.255608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.255797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.255829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.255989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.256024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.256225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.256257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.256421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.256455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.256606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.256638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.256838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.256871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.257044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.257076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.257234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.257266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 
00:37:34.320 [2024-07-15 08:04:25.257464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.257497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.320 qpair failed and we were unable to recover it. 00:37:34.320 [2024-07-15 08:04:25.257681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.320 [2024-07-15 08:04:25.257713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.257886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.257924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.258094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.258126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.258273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.258306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.258494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.258526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.258693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.258726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.258895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.258929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.259114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.259147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.259339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.259380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 
00:37:34.321 [2024-07-15 08:04:25.259537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.259570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.259708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.259740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.259904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.260136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.260169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.260349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.260382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.260548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.260581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.260757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.260790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.260969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.261002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.261142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.261179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.261311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.261346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 
00:37:34.321 [2024-07-15 08:04:25.261480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.261520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.261710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.261750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.261886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.261919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.262050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.262093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.262265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.262309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.262457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.262490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.262610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.262643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.262766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.262808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.262992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.263026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 00:37:34.321 [2024-07-15 08:04:25.263200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.321 [2024-07-15 08:04:25.263233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.321 qpair failed and we were unable to recover it. 
00:37:34.321 [2024-07-15 08:04:25.263406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.321 [2024-07-15 08:04:25.263439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.321 qpair failed and we were unable to recover it.
00:37:34.321 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats from 2024-07-15 08:04:25.263603 through 08:04:25.294715 ...]
00:37:34.327 [... connect() failed (errno = 111) / qpair failed sequence for tqpair=0x6150001ffe80 continues from 2024-07-15 08:04:25.294872 ...]
00:37:34.327 [2024-07-15 08:04:25.295505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:34.327 [2024-07-15 08:04:25.295622] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:34.327 [... sequence continues through 2024-07-15 08:04:25.296519 ...]
00:37:34.327 [... last connect() failed (errno = 111) / qpair failed entries for tqpair=0x6150001ffe80 end at 2024-07-15 08:04:25.296911 ...]
00:37:34.327 [2024-07-15 08:04:25.297098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.327 [2024-07-15 08:04:25.297148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.327 qpair failed and we were unable to recover it.
00:37:34.328 [... the same three-line sequence for tqpair=0x6150001f2a00 repeats through 2024-07-15 08:04:25.304372 ...]
00:37:34.328 [2024-07-15 08:04:25.304511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.328 [2024-07-15 08:04:25.304544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.328 qpair failed and we were unable to recover it. 00:37:34.328 [2024-07-15 08:04:25.304705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.328 [2024-07-15 08:04:25.304737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.328 qpair failed and we were unable to recover it. 00:37:34.328 [2024-07-15 08:04:25.304930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.328 [2024-07-15 08:04:25.304963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.328 qpair failed and we were unable to recover it. 00:37:34.328 [2024-07-15 08:04:25.305104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.328 [2024-07-15 08:04:25.305136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.328 qpair failed and we were unable to recover it. 00:37:34.328 [2024-07-15 08:04:25.305308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.328 [2024-07-15 08:04:25.305341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.328 qpair failed and we were unable to recover it. 00:37:34.328 [2024-07-15 08:04:25.305498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.328 [2024-07-15 08:04:25.305531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.328 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.305689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.305721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.305889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.305922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.306048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.306081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.306247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.306280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 
00:37:34.329 [2024-07-15 08:04:25.306476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.306509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.306670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.306702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.306868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.306907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.307087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.307136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.307312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.307348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.307500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.307533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.307700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.307745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.307920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.307955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.308094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.308127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.308325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.308358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 
00:37:34.329 [2024-07-15 08:04:25.308484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.308517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.308674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.308706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.308868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.308926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.309097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.309132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.309279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.309312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.309450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.309483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.309667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.309699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.309871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.309911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.310075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.310108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.310285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.310318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 
00:37:34.329 [2024-07-15 08:04:25.310507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.310540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.310707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.310739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.310900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.310933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.311090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.311123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.311294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.311327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.329 qpair failed and we were unable to recover it. 00:37:34.329 [2024-07-15 08:04:25.311483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.329 [2024-07-15 08:04:25.311515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.311679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.311712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.311870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.311911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.312072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.312105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.312241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.312273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 
00:37:34.330 [2024-07-15 08:04:25.312410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.312443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.312587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.312619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.312781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.312819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.313006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.313039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.313168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.313212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.313374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.313408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.313558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.313590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.313725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.313757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.313891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.313924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.314127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.314176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 
00:37:34.330 [2024-07-15 08:04:25.314326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.314361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.314553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.314587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.314773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.314805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.314979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.315012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.315200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.315234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.315399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.315432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.315565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.315598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.315733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.315765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.315926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.315959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.316127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.316168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 
00:37:34.330 [2024-07-15 08:04:25.316305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.316337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.316472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.316504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.316637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.316669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.316866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.316904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.317037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.330 [2024-07-15 08:04:25.317070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.330 qpair failed and we were unable to recover it. 00:37:34.330 [2024-07-15 08:04:25.317240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.317272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.317435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.317468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.317625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.317657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.317823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.317856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.318025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.318058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 
00:37:34.331 [2024-07-15 08:04:25.318223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.318256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.318417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.318449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.318607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.318640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.318797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.318830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.318970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.319003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.319173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.319206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.319326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.319359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.319489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.319521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.319678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.319711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.319865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.319927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 
00:37:34.331 [2024-07-15 08:04:25.320101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.320137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.320297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.320331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.320493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.320532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.320701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.320733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.320924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.320958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.321121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.321153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.321284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.321316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.321464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.321497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.321657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.321690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.321846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.321888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 
00:37:34.331 [2024-07-15 08:04:25.322053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.331 [2024-07-15 08:04:25.322085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.331 qpair failed and we were unable to recover it. 00:37:34.331 [2024-07-15 08:04:25.322248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.322282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.322445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.322479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.322650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.322683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.322839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.322872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.323042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.323077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.323249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.323282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.323437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.323470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.323629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.323661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.323820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.323853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 
00:37:34.332 [2024-07-15 08:04:25.324046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.324094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.324234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.324268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.324437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.324470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.324634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.324667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.324828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.324860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.325029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.325062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.325244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.325276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.325433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.325466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.325605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.325645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.325815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.325850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 
00:37:34.332 [2024-07-15 08:04:25.326023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.326056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.326202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.326234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.326388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.326420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.326600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.326633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.326796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.326829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.326998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.327033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.327185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.327219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.327389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.327422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.327577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.327609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.327766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.327798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 
00:37:34.332 [2024-07-15 08:04:25.327967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.328000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.332 [2024-07-15 08:04:25.328160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.332 [2024-07-15 08:04:25.328193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.332 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.328321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.328358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.328521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.328553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.328708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.328741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.328895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.328928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.329064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.329096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.329275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.329310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.329458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.329491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 00:37:34.333 [2024-07-15 08:04:25.329655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.333 [2024-07-15 08:04:25.329688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.333 qpair failed and we were unable to recover it. 
00:37:34.333 [2024-07-15 08:04:25.329850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.333 [2024-07-15 08:04:25.329899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.333 qpair failed and we were unable to recover it.
00:37:34.333 [2024-07-15 08:04:25.331073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.333 [2024-07-15 08:04:25.331122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.333 qpair failed and we were unable to recover it.
00:37:34.340 [... the same three-line failure repeats back-to-back, alternating between tqpair=0x6150001ffe80 and tqpair=0x6150001f2a00, through 2024-07-15 08:04:25.370400; every attempt is connect() failed, errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:37:34.340 [2024-07-15 08:04:25.370538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.370572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.370737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.370770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.370907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.370940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.371096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.371144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.371316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.371357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.371496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.371529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.371715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.371748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.371910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.371943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.372133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.372165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 00:37:34.340 [2024-07-15 08:04:25.372327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.340 [2024-07-15 08:04:25.372376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.340 qpair failed and we were unable to recover it. 
00:37:34.340 EAL: No free 2048 kB hugepages reported on node 1
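The EAL line above comes from DPDK's environment-abstraction layer, which SPDK builds on: it reports that NUMA node 1 has no free 2 MB hugepages at startup, which is often benign when pages are reserved on another node. A minimal sketch, assuming the standard Linux sysfs layout and taking the node number from the message itself, for reading the counters the message refers to:

/* hugepage_check.c - minimal sketch: read the 2048 kB hugepage counters
 * for NUMA node 1, the node named in the EAL message above.
 * The sysfs paths are the standard Linux layout; adjust the node number
 * or page size for other systems.  Build: cc -o hugepage_check hugepage_check.c
 */
#include <stdio.h>

static long read_counter(const char *path)
{
    long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    long total = read_counter(path);
    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    long free_pages = read_counter(path);

    /* "No free 2048 kB hugepages reported on node 1" corresponds to
     * free_hugepages reading 0 (or the node lacking a 2048 kB pool). */
    printf("node1 2048kB hugepages: total=%ld free=%ld\n", total, free_pages);
    return total >= 0 ? 0 : 1;
}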
00:37:34.344 [2024-07-15 08:04:25.398028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.344 [2024-07-15 08:04:25.398083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:34.344 qpair failed and we were unable to recover it.
00:37:34.345 [2024-07-15 08:04:25.399680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.399713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.399845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.399895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.400060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.400092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.400243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.400275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.400414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.400446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.400624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.400657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.400826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.400866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.401014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.401047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.401180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.401212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.401372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.401404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 
00:37:34.345 [2024-07-15 08:04:25.401543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.401575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.401765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.401797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.401966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.401999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.402154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.402187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.402372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.402404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.402582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.402625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.402805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.402838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.403050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.403083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.403263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.403295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.403466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.403498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 
00:37:34.345 [2024-07-15 08:04:25.403666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.403698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.403834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.403870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.404041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.404073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.404267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.404300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.404430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.404461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.404627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.404660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.404844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.404883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.405024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.405057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.405233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.405266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.405402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.405434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 
00:37:34.345 [2024-07-15 08:04:25.405599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.345 [2024-07-15 08:04:25.405631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.345 qpair failed and we were unable to recover it. 00:37:34.345 [2024-07-15 08:04:25.405797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.405830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.406009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.406046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.406217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.406250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.406450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.406483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.406613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.406645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.406814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.406846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.407040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.407085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.407247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.407286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.407494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.407532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 
00:37:34.346 [2024-07-15 08:04:25.407714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.407752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.407984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.408023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.408179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.408215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.408409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.408443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.408601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.408633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.408797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.408828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.409033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.409068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.409208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.409241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.409412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.409444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.409609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.409642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 
00:37:34.346 [2024-07-15 08:04:25.409776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.409809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.409976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.410008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.410173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.410205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.410364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.410397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.410545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.410577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.410735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.410768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.410928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.410960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.411102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.411135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.411281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.411314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.411472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.411504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 
00:37:34.346 [2024-07-15 08:04:25.411669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.411701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.411836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.411870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.412069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.412101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.412271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.346 [2024-07-15 08:04:25.412304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.346 qpair failed and we were unable to recover it. 00:37:34.346 [2024-07-15 08:04:25.412460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.412493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.412652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.412684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.412844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.412882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.413067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.413100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.413264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.413296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.413459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.413492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 
00:37:34.347 [2024-07-15 08:04:25.413628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.413660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.413811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.413843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.414016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.414073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.414251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.414288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.414477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.414512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.414704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.414738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.414918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.414953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.415130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.415164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.415339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.415372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.415538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.415570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 
00:37:34.347 [2024-07-15 08:04:25.415711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.415743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.415903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.415936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.416073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.416105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.416264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.416297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.416448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.416480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.416645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.416678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.416847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.416896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.417037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.417069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.417241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.417273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.417436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.417469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 
00:37:34.347 [2024-07-15 08:04:25.417626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.417658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.417823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.417868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.418005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.418037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.418200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.418232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.418420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.347 [2024-07-15 08:04:25.418452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.347 qpair failed and we were unable to recover it. 00:37:34.347 [2024-07-15 08:04:25.418589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.418621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.418785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.418817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.418990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.419023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.419190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.419222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.419379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.419412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 
00:37:34.348 [2024-07-15 08:04:25.419601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.419633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.419799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.419830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.419982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.420015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.420182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.420214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.420398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.420431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.420590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.420621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.420762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.420794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.420954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.420986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.421151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.421205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.421408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.421444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 
00:37:34.348 [2024-07-15 08:04:25.421577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.421611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.421748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.421781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.421943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.421982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.422147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.422185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.422322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.422355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.422545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.422578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.422759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.422807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.423011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.423058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.423278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.348 [2024-07-15 08:04:25.423313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.348 qpair failed and we were unable to recover it. 00:37:34.348 [2024-07-15 08:04:25.423479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.423511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 
00:37:34.349 [2024-07-15 08:04:25.423645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.423678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.423845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.423894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.424025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.424057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.424189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.424221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.424386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.424419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.424551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.424583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.424777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.424810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.424995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.425028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.425186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.425219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.425351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.425384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 
00:37:34.349 [2024-07-15 08:04:25.425551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.425583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.425768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.425816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.426036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.426072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.426279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.426313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.426474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.426507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.426686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.426719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.426905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.426939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.427073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.427106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.427259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.427291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.427424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.427457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 
00:37:34.349 [2024-07-15 08:04:25.427642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.427674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.427835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.427870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.428044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.428082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.428294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.428326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.428458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.428490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.428656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.428689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.428864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.428906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.429073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.429107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.349 [2024-07-15 08:04:25.429308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.349 [2024-07-15 08:04:25.429341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.349 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.429531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.429564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 
00:37:34.350 [2024-07-15 08:04:25.429722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.429756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.429919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.429959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.430139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.430194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.430333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.430368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.430521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.430554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.430742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.430775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.430943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.430975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.431099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.431132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.431333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.431366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.431527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.431559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 
00:37:34.350 [2024-07-15 08:04:25.431733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.431766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.431899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.431931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.432089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.432121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.432297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.432330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.432494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.432526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.432690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.432722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.432889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.432921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.433108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.433156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.433336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.433383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 00:37:34.350 [2024-07-15 08:04:25.433561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.350 [2024-07-15 08:04:25.433596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.350 qpair failed and we were unable to recover it. 
00:37:34.350 [2024-07-15 08:04:25.433789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.433822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.433994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.434028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.434171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.434204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.434368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.434401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.434593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.434626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.434790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.434822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.435014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.435061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.435231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.435278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.435448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.350 [2024-07-15 08:04:25.435484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.350 qpair failed and we were unable to recover it.
00:37:34.350 [2024-07-15 08:04:25.435627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.435663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.435843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.435883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.436052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.436085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.436255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.436289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.436422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.436457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.436637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.436671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.436830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.436862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.437027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.437073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.437249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.437284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.437449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.437482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.437607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.437641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.437798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.437830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.438035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.438069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.438204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.438243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.438400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.438432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.438564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.438618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.438805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.438837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.438982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.439015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.439178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.439210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.439393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.439425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.439585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.439617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.439749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.439781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.439941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.439974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.440130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.440162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.440336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.440368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.440533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.440565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.440723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.440756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.440898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.440938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.441075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.441107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.441271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.441303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.441462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.441494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.441660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.441692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.351 [2024-07-15 08:04:25.441891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.351 [2024-07-15 08:04:25.441927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.351 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.442109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.442156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.442337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.442335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:34.352 [2024-07-15 08:04:25.442373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.442534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.442568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.442733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.442765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.442902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.442940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.443103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.443137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.443308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.443341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.443510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.443543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.443684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.443716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.443918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.443965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.444177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.444224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.444402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.444437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.444616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.444651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.444844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.444895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.445065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.445099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.445263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.445296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.445486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.445520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.445680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.445712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.445843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.445883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.446040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.446073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.446251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.446289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.446484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.446519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.446692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.446725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.446891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.446937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.447102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.447135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.447308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.447342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.447506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.447539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.447701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.447735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.447905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.447947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.448126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.448158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.448369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.352 [2024-07-15 08:04:25.448401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.352 qpair failed and we were unable to recover it.
00:37:34.352 [2024-07-15 08:04:25.448593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.448626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.448784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.448817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.448984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.449019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.449224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.449271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.449444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.449478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.449620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.449653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.449812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.449845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.449990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.450024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.450210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.450243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.450379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.450411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.450565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.450597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.450782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.450814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.450950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.450983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.451122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.451154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.451345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.451377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.451511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.451544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.451718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.451751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.451910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.451944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.452088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.452123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.452328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.452361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.452501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.452534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.452692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.452724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.452868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.452921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.453083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.453116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.453288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.453322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.453500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.453533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.453695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.453728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.453872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.453939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.454092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.454127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.454295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.454334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.353 qpair failed and we were unable to recover it.
00:37:34.353 [2024-07-15 08:04:25.454471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.353 [2024-07-15 08:04:25.454504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.454673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.454706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.454858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.454925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.455102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.455136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.455278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.455311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.455474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.455508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.455695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.455728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.455894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.455932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.456154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.456210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.456356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.456391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.456586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.456620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.456764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.456797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.456960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.456994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.457138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.457179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.457342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.457374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.457565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.457598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.457761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.457793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.457952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.457985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.458164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.458212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.458371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.458407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.458572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.458606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.458781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.458814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.458994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.459028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.459212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.459245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.459451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.459484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.459691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.459726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.459875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.459930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.460146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.460201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.460382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.460417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.460550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.460583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.460745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.460778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.460927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.460962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.354 [2024-07-15 08:04:25.461121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.354 [2024-07-15 08:04:25.461169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.354 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.461353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.461389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.461583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.461617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.461773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.461806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.461954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.461989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.462158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.462198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.462394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.462426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.462584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.462621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.462811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.462843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.463030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.463078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.463266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.463313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.463514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.463550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.463742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.463776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.463916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.463950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.464116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.464149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.464344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.464377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.464546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.464580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.464776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.464809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.465001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.465049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.465245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.465291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.465441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.465475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.465650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.465685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.465874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.465915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.466059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.466093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.466281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.466315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.466486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.466520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.466712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.466746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.466896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.466930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.467076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.467123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.467305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.467340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.467484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.467518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.467658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.467692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.467856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.467896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.468076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.355 [2024-07-15 08:04:25.468122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.355 qpair failed and we were unable to recover it.
00:37:34.355 [2024-07-15 08:04:25.468303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-07-15 08:04:25.468338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.355 qpair failed and we were unable to recover it. 00:37:34.355 [2024-07-15 08:04:25.468530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-07-15 08:04:25.468564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.355 qpair failed and we were unable to recover it. 00:37:34.355 [2024-07-15 08:04:25.468701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-07-15 08:04:25.468734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.355 qpair failed and we were unable to recover it. 00:37:34.355 [2024-07-15 08:04:25.468869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.468910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.469114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.469148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.469336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.469370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.469528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.469561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.469701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.469736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.469950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.469998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.470150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.470197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 
00:37:34.356 [2024-07-15 08:04:25.470340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.470375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.470526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.470559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.470728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.470774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.470947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.470985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.471117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.471149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.471336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.471368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.471525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.471557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.471710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.471742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.471931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.471979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.472157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.472192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 
00:37:34.356 [2024-07-15 08:04:25.472361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.472395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.472530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.472563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.472726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.472759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.472922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.472957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.473133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.473180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.473324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.473358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.473526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.473560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.473727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.473760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.473926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.473959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.474141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.474173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 
00:37:34.356 [2024-07-15 08:04:25.474331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.474363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.474530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.474562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.474725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.474758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.474921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.474954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.475115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.475147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.475300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.475333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.475484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.475517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.356 [2024-07-15 08:04:25.475669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-07-15 08:04:25.475701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.356 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.475826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.475859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.476046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.476100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 
00:37:34.357 [2024-07-15 08:04:25.476328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.476365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.476533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.476567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.476725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.476758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.476908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.476942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.477111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.477152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.477291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.477324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.477504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.477536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.477699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.477731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.477882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.477917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.478060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.478093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 
00:37:34.357 [2024-07-15 08:04:25.478275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.478307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.478464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.478497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.478699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.478732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.478860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.478906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.479075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.479108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.479266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.479299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.479457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.479489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.479654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.479686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.479845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.479884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.480016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.480049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 
00:37:34.357 [2024-07-15 08:04:25.480216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.480248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.480391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.480424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.480586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.480618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.480757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.480789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.480923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.480957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.481115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.481147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.481309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.481342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.481473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.481506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.481643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.481676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.481805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.481838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 
00:37:34.357 [2024-07-15 08:04:25.481970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.482003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.482136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.482168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.482326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.482358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.357 qpair failed and we were unable to recover it. 00:37:34.357 [2024-07-15 08:04:25.482491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.357 [2024-07-15 08:04:25.482524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.482704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.482737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.482889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.482922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.483067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.483099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.483266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.483298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.483445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.483477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.483653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.483685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 
00:37:34.358 [2024-07-15 08:04:25.483817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.483850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.484021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.484054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.484213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.484244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.484394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.484426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.484584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.484616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.484766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.484799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.484950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.484983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.485141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.485173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.485367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.485411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.485599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.485632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 
00:37:34.358 [2024-07-15 08:04:25.485791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.485823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.485997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.486030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.486180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.486213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.486372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.486409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.486583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.486615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.486772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.486804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.486937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.486970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.487139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.487171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.487316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.487349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.487535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.487566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 
00:37:34.358 [2024-07-15 08:04:25.487721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.487753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.487907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.487940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.488134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.358 [2024-07-15 08:04:25.488166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.358 qpair failed and we were unable to recover it. 00:37:34.358 [2024-07-15 08:04:25.488327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.488359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.488542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.488575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.488706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.488738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.488903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.488936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.489108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.489141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.489305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.489337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.489528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.489560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 
00:37:34.359 [2024-07-15 08:04:25.489724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.489755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.489891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.489923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.490063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.490095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.490229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.490261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.490440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.490472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.490641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.490673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.490831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.490863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.491011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.491044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.491203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.491235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.491396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.491428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 
00:37:34.359 [2024-07-15 08:04:25.491601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.491634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.491827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.491860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.492020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.492053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.492190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.492223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.492381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.492413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.492568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.492600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.492784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.492816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.492989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.493021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.493207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.493239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.493368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.493401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 
00:37:34.359 [2024-07-15 08:04:25.493551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.493583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.493739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.493771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.493957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.493991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.494121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.494157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.494345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.494378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.494533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.494566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.494746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.494779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.494948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.494981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.495136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.495168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.495306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.495338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 
00:37:34.359 [2024-07-15 08:04:25.495494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.495527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.359 [2024-07-15 08:04:25.495691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.359 [2024-07-15 08:04:25.495723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.359 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.495888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.495921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.496079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.496111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.496262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.496294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.496450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.496482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.496646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.496678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.496852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.496890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.497062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.497095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 00:37:34.360 [2024-07-15 08:04:25.497263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.360 [2024-07-15 08:04:25.497296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.360 qpair failed and we were unable to recover it. 
00:37:34.360 [2024-07-15 08:04:25.497427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.360 [2024-07-15 08:04:25.497460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.360 qpair failed and we were unable to recover it.
00:37:34.648 [... the same three-line failure pattern repeats continuously through 2024-07-15 08:04:25.539592, every few hundred microseconds, for tqpair=0x6150001f2a00, 0x615000210000, 0x61500021ff00, and 0x6150001ffe80, always with addr=10.0.0.2, port=4420 ...]
00:37:34.648 [2024-07-15 08:04:25.539784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.539832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.539982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.540018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.540221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.540257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.540394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.540428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.540606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.540640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.540799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.540833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.541005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.541039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.541201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.541236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.541426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.541461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.541611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.541646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 
00:37:34.648 [2024-07-15 08:04:25.541780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.541814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.542015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.648 [2024-07-15 08:04:25.542049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.648 qpair failed and we were unable to recover it. 00:37:34.648 [2024-07-15 08:04:25.542238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.542271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.542416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.542454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.542592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.542625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.542780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.542813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.542981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.543014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.543190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.543223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.543411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.543443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.543608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.543641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 
00:37:34.649 [2024-07-15 08:04:25.543806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.543838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.544035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.544071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.544228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.544261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.544432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.544465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.544597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.544630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.544789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.544821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.544989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.545022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.545213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.545246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.545403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.545435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.545596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.545629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 
00:37:34.649 [2024-07-15 08:04:25.545771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.545805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.545974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.546009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.546179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.546215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.546376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.546410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.546601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.546635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.546804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.546838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.547013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.547048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.547204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.547251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.547424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.547459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.547596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.547630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 
00:37:34.649 [2024-07-15 08:04:25.547777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.547811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.547996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.548029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.548166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.548198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.548385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.548417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.548557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.548590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.548731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.548763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.548949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.548982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.549118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.549151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.549313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.549345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.549504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.549537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 
00:37:34.649 [2024-07-15 08:04:25.549711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.549743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.649 qpair failed and we were unable to recover it. 00:37:34.649 [2024-07-15 08:04:25.549911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.649 [2024-07-15 08:04:25.549945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.550132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.550179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.550363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.550398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.550577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.550614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.550753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.550787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.550982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.551016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.551181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.551216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.551383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.551416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.551603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.551636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 
00:37:34.650 [2024-07-15 08:04:25.551783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.551817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.551999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.552059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.552200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.552234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.552428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.552461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.552596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.552629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.552753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.552785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.552923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.552956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.553124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.553157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.553292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.553324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.553490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.553523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 
00:37:34.650 [2024-07-15 08:04:25.553652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.553685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.553860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.553924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.554104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.554142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.554286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.554321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.554528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.554561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.554727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.554760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.554930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.554965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.555101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.555135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.555306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.555340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.555503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.555536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 
00:37:34.650 [2024-07-15 08:04:25.555677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.555716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.555912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.555945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.556121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.556153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.556343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.556376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.556560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.650 [2024-07-15 08:04:25.556593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.650 qpair failed and we were unable to recover it. 00:37:34.650 [2024-07-15 08:04:25.556787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.556823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.556998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.557033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.557173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.557207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.557379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.557413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.557544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.557578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 
00:37:34.651 [2024-07-15 08:04:25.557753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.557801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.557969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.558016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.558159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.558196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.558396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.558430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.558607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.558641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.558804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.558837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.559012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.559047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.559229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.559277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.559447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.559482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.559652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.559685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 
00:37:34.651 [2024-07-15 08:04:25.559845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.559884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.560052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.560085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.560284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.560317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.560468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.560501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.560660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.560693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.560845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.560884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.561102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.561149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.561345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.561391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.561566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.561601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.561764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.561798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 
00:37:34.651 [2024-07-15 08:04:25.561944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.561979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.562140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.562174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.562355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.562389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.562548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.562581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.562750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.562784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.562964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.563000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.563160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.563199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.563371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.563416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.563600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.563636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.563777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.563811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 
00:37:34.651 [2024-07-15 08:04:25.563989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.564028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.564179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.564228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.564398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.564431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.564564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.564597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.564763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.564797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.651 qpair failed and we were unable to recover it. 00:37:34.651 [2024-07-15 08:04:25.564937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.651 [2024-07-15 08:04:25.564972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.565143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.565191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.565363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.565400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.565531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.565564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.565715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.565749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 
00:37:34.652 [2024-07-15 08:04:25.565945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.565979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.566180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.566213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.566407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.566442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.566589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.566623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.566770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.566803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.566954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.566989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.567209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.567257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.567403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.567438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.567609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.567644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.567793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.567826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 
00:37:34.652 [2024-07-15 08:04:25.568012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.568060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.568197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.568231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.568402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.568452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.568655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.568695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.568840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.568880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.569050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.569084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.569224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.569258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.569484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.569517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.569709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.569743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.569891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.569925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 
00:37:34.652 [2024-07-15 08:04:25.570101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.570149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.570304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.570340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.570489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.570526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.570668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.570703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.570944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.570978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.571127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.571162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.571328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.571362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.571553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.571586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.571761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.571795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.571964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.572000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 
00:37:34.652 [2024-07-15 08:04:25.572163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.572201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.572366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.572399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.572640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.572674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.572840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.572875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.573021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.573054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.652 qpair failed and we were unable to recover it. 00:37:34.652 [2024-07-15 08:04:25.573194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.652 [2024-07-15 08:04:25.573243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.573423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.573456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.573614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.573646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.573809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.573842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.574034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.574068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 
00:37:34.653 [2024-07-15 08:04:25.574275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.574308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.574439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.574472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.574627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.574660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.574867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.574907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.575045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.575078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.575218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.575252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.575418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.575450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.575594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.575627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.575755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.575788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.575950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.575983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 
00:37:34.653 [2024-07-15 08:04:25.576174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.576207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.576370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.576402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.576550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.576586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.576786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.576820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.576992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.577027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.577161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.577196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.577337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.577385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.577564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.577598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.577743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.577778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.577958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.578005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 
00:37:34.653 [2024-07-15 08:04:25.578233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.578269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.578452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.578499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.578688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.578722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.578892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.578927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.579077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.579112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.579303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.579336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.579467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.579500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.579722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.579758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.579946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.579994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.580140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.580175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 
00:37:34.653 [2024-07-15 08:04:25.580372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.580411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.580547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.580581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.580743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.580776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.580937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.580971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.581163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.581198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.581368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.653 [2024-07-15 08:04:25.581401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.653 qpair failed and we were unable to recover it. 00:37:34.653 [2024-07-15 08:04:25.581576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.581612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.581780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.581813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.581949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.581982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.582145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.582177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 
00:37:34.654 [2024-07-15 08:04:25.582310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.582343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.582506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.582539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.582732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.582765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.582958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.582991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.583134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.583167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.583307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.583339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.583509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.583545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.583738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.583772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.583946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.583982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.584144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.584178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 
00:37:34.654 [2024-07-15 08:04:25.584317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.584351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.584541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.584574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.584740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.584774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.584944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.584978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.585175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.585207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.585334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.585366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.585498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.585530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.585677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.585710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.585890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.585926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.586090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.586123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 
00:37:34.654 [2024-07-15 08:04:25.586289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.586323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.586506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.586539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.586706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.586739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.586902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.586936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.587106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.587140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.587274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.587307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.587446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.587496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.587684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.587716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.587843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.587883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.654 [2024-07-15 08:04:25.588015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.588047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 
00:37:34.654 [2024-07-15 08:04:25.588181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.654 [2024-07-15 08:04:25.588218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.654 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.588379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.588411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.588579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.588611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.588800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.588834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.588979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.589012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.589156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.589190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.589401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.589435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.589589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.589621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.589784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.589817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.589984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.590018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 
00:37:34.655 [2024-07-15 08:04:25.590191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.590223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.590391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.590423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.590560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.590593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.590727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.590759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.590910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.590943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.591114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.591149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.591311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.591344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.591532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.591565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.591725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.591757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.591934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.591972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 
00:37:34.655 [2024-07-15 08:04:25.592115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.592153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.592296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.592329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.592496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.592530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.592667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.592701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.592892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.592940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.593117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.593153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.593347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.593380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.593550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.593583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.593744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.593776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.593953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.593988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 
00:37:34.655 [2024-07-15 08:04:25.594154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.594186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.594352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.594385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.594552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.594584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.594771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.594806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.594948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.594982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.595159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.595193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.595351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.595384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.595571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.595604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.595802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.595839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.655 [2024-07-15 08:04:25.596024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.596059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 
00:37:34.655 [2024-07-15 08:04:25.596222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.655 [2024-07-15 08:04:25.596259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.655 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.596400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.596433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.596619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.596651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.596808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.596841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.597008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.597042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.597206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.597241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.597429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.597462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.597614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.597648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.597782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.597816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.597978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.598012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 
00:37:34.656 [2024-07-15 08:04:25.598183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.598216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.598346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.598380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.598536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.598568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.598724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.598757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.598897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.598932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.599069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.599101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.599238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.599271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.599400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.599432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.599620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.599653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.599791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.599823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 
00:37:34.656 [2024-07-15 08:04:25.599965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.599999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.600186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.600218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.600408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.600441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.600632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.600665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.600794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.600826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.601024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.601058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.601214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.601247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.601414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.601447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.601583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.601616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 00:37:34.656 [2024-07-15 08:04:25.601804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.656 [2024-07-15 08:04:25.601836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.656 qpair failed and we were unable to recover it. 
00:37:34.656 [2024-07-15 08:04:25.603015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.656 [2024-07-15 08:04:25.603064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:34.656 qpair failed and we were unable to recover it.
00:37:34.656 [2024-07-15 08:04:25.603471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.656 [2024-07-15 08:04:25.603508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.656 qpair failed and we were unable to recover it.
00:37:34.658 [2024-07-15 08:04:25.613037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.658 [2024-07-15 08:04:25.613088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.658 qpair failed and we were unable to recover it.
[... the same three-line connect() failed / sock connection error / qpair failed sequence repeats for every retry from 08:04:25.601 through 08:04:25.642, cycling over tqpairs 0x6150001f2a00, 0x615000210000, 0x61500021ff00, and 0x6150001ffe80, always against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:37:34.661 [2024-07-15 08:04:25.642531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.642565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.661 [2024-07-15 08:04:25.642700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.642733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.661 [2024-07-15 08:04:25.642928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.642962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.661 [2024-07-15 08:04:25.643109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.643142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.661 [2024-07-15 08:04:25.643329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.643363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.661 [2024-07-15 08:04:25.643541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.643575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.661 [2024-07-15 08:04:25.643721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.661 [2024-07-15 08:04:25.643757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.661 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.643924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.643957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.644132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.644168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.644313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.644348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 
00:37:34.662 [2024-07-15 08:04:25.644553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.644591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.644756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.644789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.644955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.644989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.645167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.645223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.645433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.645469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.645630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.645664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.645827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.645860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.646057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.646091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.646281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.646315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.646484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.646518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 
00:37:34.662 [2024-07-15 08:04:25.646692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.646729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.646942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.646977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.647125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.647161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.647368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.647402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.647543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.647577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.647765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.647803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.647980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.648014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.648166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.648200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.648384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.648418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.648554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.648588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 
00:37:34.662 [2024-07-15 08:04:25.648744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.648778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.648944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.648978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.649113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.649146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.649328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.649375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.649564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.649599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.649770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.649807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.649987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.650023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.650206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.650254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.650397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.650432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.650595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.650629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 
00:37:34.662 [2024-07-15 08:04:25.650792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.650824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.650986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.651020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.651171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.651204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.651363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.651395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.651583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.651615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.651779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.651812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.651952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.662 [2024-07-15 08:04:25.651985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.662 qpair failed and we were unable to recover it. 00:37:34.662 [2024-07-15 08:04:25.652141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.652174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.652348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.652384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.652611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.652644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 
00:37:34.663 [2024-07-15 08:04:25.652833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.652867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.653047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.653081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.653274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.653308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.653474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.653508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.653646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.653680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.653886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.653919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.654064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.654098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.654269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.654303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.654447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.654480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.654608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.654641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 
00:37:34.663 [2024-07-15 08:04:25.654806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.654840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.655066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.655114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.655277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.655314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.655520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.655554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.655731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.655766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.655920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.655954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.656112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.656146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.656342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.656375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.656542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.656575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.656777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.656825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 
00:37:34.663 [2024-07-15 08:04:25.656988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.657026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.657217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.657251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.657437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.657471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.657638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.657686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.657850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.657891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.658060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.658094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.658268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.658303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.658465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.658503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.658702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.658738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.658908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.658942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 
00:37:34.663 [2024-07-15 08:04:25.659137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.659171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.663 qpair failed and we were unable to recover it. 00:37:34.663 [2024-07-15 08:04:25.659351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.663 [2024-07-15 08:04:25.659385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.659514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.659547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.659699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.659746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.659925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.659962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.660169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.660216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.660392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.660442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.660630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.660664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.660828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.660861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.661005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.661038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 
00:37:34.664 [2024-07-15 08:04:25.661219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.661268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.661448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.661484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.661620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.661654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.661810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.661843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.662018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.662051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.662212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.662245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.662419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.662453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.662623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.662656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.662926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.662961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.663119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.663156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 
00:37:34.664 [2024-07-15 08:04:25.663356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.663391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.663550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.663596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.663759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.663793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.663928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.663961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.664179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.664229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.664390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.664426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.664629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.664664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.664829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.664863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.665019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.665052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.665212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.665246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 
00:37:34.664 [2024-07-15 08:04:25.665394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.665427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.665628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.665661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.665819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.665852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.666019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.666053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.666217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.666250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.666408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.666441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.666643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.666678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.666873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.666917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.667045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.667079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.667272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.667305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 
00:37:34.664 [2024-07-15 08:04:25.667441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.667474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.664 [2024-07-15 08:04:25.667608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.664 [2024-07-15 08:04:25.667641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.664 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.667812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.667845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.668004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.668052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.668217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.668252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.668428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.668465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.668602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.668637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.668892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.668940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.669105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.669141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 00:37:34.665 [2024-07-15 08:04:25.669279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.665 [2024-07-15 08:04:25.669315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.665 qpair failed and we were unable to recover it. 
00:37:34.665 [2024-07-15 08:04:25.669491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.665 [2024-07-15 08:04:25.669526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:34.665 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record triple repeats continuously from 08:04:25.669 through 08:04:25.695 for tqpair addresses 0x615000210000, 0x6150001f2a00, 0x61500021ff00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420 ...]
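Annotation: errno 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 while nvme_tcp_qpair_connect_sock kept retrying. A minimal triage sketch for this failure mode, run on the target host (the address and port come from the log above; the rpc.py path and the use of nvmf_get_subsystems are illustrative assumptions, not taken from this log):

    # Is anything listening on the NVMe-oF/TCP port the initiator is dialing?
    ss -ltn | grep ':4420' || echo "no listener on 4420"
    # If the SPDK target app is up, its RPC interface can confirm subsystem/listener
    # state (scripts/rpc.py ships with SPDK; exact path depends on the checkout):
    ./scripts/rpc.py nvmf_get_subsystems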
[... connect() failed (errno = 111) / qpair-failed record triples continue from 08:04:25.695 through 08:04:25.697 for tqpairs 0x6150001f2a00, 0x61500021ff00, and 0x6150001ffe80 ...]
00:37:34.668 [2024-07-15 08:04:25.697573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:34.668 [2024-07-15 08:04:25.697617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:34.668 [2024-07-15 08:04:25.697642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:34.668 [2024-07-15 08:04:25.697661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:34.668 [2024-07-15 08:04:25.697680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:34.668 [2024-07-15 08:04:25.697812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:37:34.668 [2024-07-15 08:04:25.697840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:37:34.668 [2024-07-15 08:04:25.697859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:37:34.668 [2024-07-15 08:04:25.697868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
[... connect() failed (errno = 111) / qpair-failed record triples resume at 08:04:25.697, interleaved with the reactor startup above, and continue through 08:04:25.699 for tqpairs 0x615000210000, 0x61500021ff00, and 0x6150001ffe80 ...]
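Annotation: the app_setup_trace NOTICE lines above give the capture recipe verbatim; a sketch of both options, using only the command, instance id, and shm file name stated in the log (the /tmp destination is an illustrative choice):

    # Live snapshot of the nvmf app's tracepoints (Tracepoint Group Mask 0xFFFF):
    spdk_trace -s nvmf -i 0
    # Or grab the trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0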
[... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record triples continue from 08:04:25.699 through 08:04:25.711 for tqpairs 0x6150001ffe80, 0x61500021ff00, 0x6150001f2a00, and 0x615000210000, all targeting addr=10.0.0.2, port=4420; every attempt in this span ends unrecovered ...]
00:37:34.670 [2024-07-15 08:04:25.711475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.711509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.711650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.711683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.711861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.711905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.712050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.712083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.712213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.712247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.712417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.712451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.712642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.712676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.712815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.712850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.713000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.713036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.713197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.713231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 
00:37:34.670 [2024-07-15 08:04:25.713371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.713406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.713555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.713589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.713756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.713806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.713981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.714029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.714188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.714225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.714405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.714439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.714581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.714615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.714754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.670 [2024-07-15 08:04:25.714788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.670 qpair failed and we were unable to recover it. 00:37:34.670 [2024-07-15 08:04:25.714955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.714994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.715137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.715172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 
00:37:34.671 [2024-07-15 08:04:25.715320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.715357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.715528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.715564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.715737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.715771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.715918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.715952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.716093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.716126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.716315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.716348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.716488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.716521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.716674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.716707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.716875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.716913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.717059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.717092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 
00:37:34.671 [2024-07-15 08:04:25.717261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.717295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.717447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.717495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.717690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.717727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.717892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.717958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.718097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.718130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.718293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.718326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.718492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.718525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.718694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.718728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.718908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.718942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.719101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.719134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 
00:37:34.671 [2024-07-15 08:04:25.719319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.719352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.719547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.719580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.719785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.719839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.720064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.720100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.720259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.720292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.720432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.720466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.720629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.720662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.720817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.720865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.721017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.721052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.721188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.721221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 
00:37:34.671 [2024-07-15 08:04:25.721389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.721423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.721554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.721588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.671 [2024-07-15 08:04:25.721798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.671 [2024-07-15 08:04:25.721846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.671 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.722005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.722041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.722199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.722232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.722391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.722424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.722572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.722606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.722755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.722802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.722960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.723000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.723188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.723235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 
00:37:34.672 [2024-07-15 08:04:25.723416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.723451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.723586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.723620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.723759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.723792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.723924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.723958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.724101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.724134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.724268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.724301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.724462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.724495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.724636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.724670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.724869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.724911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.725081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.725115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 
00:37:34.672 [2024-07-15 08:04:25.725251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.725284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.725440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.725473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.725630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.725665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.725814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.725873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.726057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.726093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.726237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.726270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.726444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.726478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.726637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.726670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.726849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.726914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.727062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.727109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 
00:37:34.672 [2024-07-15 08:04:25.727246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.727280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.727440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.727473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.727609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.727642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.727800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.727833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.727985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.728018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.728183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.728221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.728382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.728417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.728560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.728593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.728754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.728787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.728938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.728972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 
00:37:34.672 [2024-07-15 08:04:25.729114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.729149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.729303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.729336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.729498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.672 [2024-07-15 08:04:25.729531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.672 qpair failed and we were unable to recover it. 00:37:34.672 [2024-07-15 08:04:25.729684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.729717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.729870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.729918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.730055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.730088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.730241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.730275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.730418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.730461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.730602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.730639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.730810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.730842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 
00:37:34.673 [2024-07-15 08:04:25.731009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.731042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.731197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.731245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.731391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.731427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.731609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.731644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.731824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.731858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.732037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.732071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.732210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.732244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.732405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.732439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.732600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.732633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.732773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.732806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 
00:37:34.673 [2024-07-15 08:04:25.732977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.733012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.733155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.733189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.733348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.733395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.733553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.733589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.733732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.733765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.733906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.733940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.734110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.734143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.734271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.734304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.734459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.734492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.734710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.734748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 
00:37:34.673 [2024-07-15 08:04:25.734920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.734956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.735092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.735128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.735337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.735372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.735510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.735543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.735675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.735709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.735853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.735894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.736059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.736094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.736245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.736279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.736409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.736442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.736609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.736643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 
00:37:34.673 [2024-07-15 08:04:25.736817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.736853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.737029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.737063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.737224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.737257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.737420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.673 [2024-07-15 08:04:25.737453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.673 qpair failed and we were unable to recover it. 00:37:34.673 [2024-07-15 08:04:25.737610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.674 [2024-07-15 08:04:25.737642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.674 qpair failed and we were unable to recover it. 00:37:34.674 [2024-07-15 08:04:25.737773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.674 [2024-07-15 08:04:25.737805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.674 qpair failed and we were unable to recover it. 00:37:34.674 [2024-07-15 08:04:25.737994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.674 [2024-07-15 08:04:25.738028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.674 qpair failed and we were unable to recover it. 00:37:34.674 [2024-07-15 08:04:25.738169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.674 [2024-07-15 08:04:25.738205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.674 qpair failed and we were unable to recover it. 00:37:34.674 [2024-07-15 08:04:25.738375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.674 [2024-07-15 08:04:25.738413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.674 qpair failed and we were unable to recover it. 00:37:34.674 [2024-07-15 08:04:25.738579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.674 [2024-07-15 08:04:25.738613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.674 qpair failed and we were unable to recover it. 
00:37:34.674 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with only the timestamp and the tqpair pointer varying, cycling over tqpair addresses 0x615000210000, 0x6150001ffe80, 0x6150001f2a00, and 0x61500021ff00, up to the final occurrence: ...]
00:37:34.679 [2024-07-15 08:04:25.776988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.679 [2024-07-15 08:04:25.777021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.679 qpair failed and we were unable to recover it.
00:37:34.679 [2024-07-15 08:04:25.777203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.777237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.777370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.777403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.777536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.777570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.777730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.777763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.777905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.777939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.778103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.778137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.778300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.778337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.778533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.778580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.778719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.778752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.778946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.778980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 
00:37:34.679 [2024-07-15 08:04:25.779165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.779198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.779343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.779377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.779530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.779564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.779724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.779757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.779907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.779941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.780075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.780108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.780274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.780325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.780472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.780507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.780652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.780689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.780858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.780898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 
00:37:34.679 [2024-07-15 08:04:25.781036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.781069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.781209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.781242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.781409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.781443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.781673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.781706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.781867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.781919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.782066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.782102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.782311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.782361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.782536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.782572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.782726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.782762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.679 [2024-07-15 08:04:25.782901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.782944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 
00:37:34.679 [2024-07-15 08:04:25.783114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.679 [2024-07-15 08:04:25.783146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.679 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.783280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.783313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.783482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.783517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.783682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.783715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.783852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.783893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.784036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.784069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.784295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.784343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.784509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.784544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.784691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.784734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.784936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.784970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 
00:37:34.680 [2024-07-15 08:04:25.785125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.785158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.785319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.785352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.785517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.785551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.785685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.785718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.785891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.785927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.786099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.786134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.786306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.786340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.786510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.786557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.786725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.786758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.786897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.786930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 
00:37:34.680 [2024-07-15 08:04:25.787092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.787125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.787255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.787287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.787425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.787458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.787623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.787657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.787790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.787822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.787980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.788028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.788180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.788216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.788436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.788485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.788634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.788670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.788836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.788871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 
00:37:34.680 [2024-07-15 08:04:25.789031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.789065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.789205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.789237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.789398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.789432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.789597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.789631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.789762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.789795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.680 [2024-07-15 08:04:25.789953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.680 [2024-07-15 08:04:25.790001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.680 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.790178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.790214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.790368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.790416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.790578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.790613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.790777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.790811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 
00:37:34.681 [2024-07-15 08:04:25.791026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.791060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.791193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.791227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.791396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.791428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.791606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.791640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.791804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.791837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.792051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.792099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.792246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.792281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.792478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.792511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.792652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.792692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.792856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.792896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 
00:37:34.681 [2024-07-15 08:04:25.793039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.793072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.793224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.793256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.793415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.793449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.793651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.793683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.793818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.793850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.794058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.794105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.794255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.794292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.794435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.794471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.794608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.794641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.794794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.794842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 
00:37:34.681 [2024-07-15 08:04:25.795027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.795062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.795203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.795237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.795431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.795464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.795599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.795633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.795805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.795841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.796018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.796052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.796206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.796239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.796370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.796403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.796553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.796587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.796750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.796783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 
00:37:34.681 [2024-07-15 08:04:25.796936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.796985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.797135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.797171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.797350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.797384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.797525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.797558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.797706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.797739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.797962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.681 [2024-07-15 08:04:25.798010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.681 qpair failed and we were unable to recover it. 00:37:34.681 [2024-07-15 08:04:25.798166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.798201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.798354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.798389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.798527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.798560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.798754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.798787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 
00:37:34.682 [2024-07-15 08:04:25.798924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.798967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.799139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.799174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.799351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.799385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.799543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.799577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.799714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.799747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.799918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.799954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.800089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.800122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.800286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.800319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.800520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.800559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.800739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.800773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 
00:37:34.682 [2024-07-15 08:04:25.800922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.800955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.801118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.801166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.801321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.801356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.801503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.801539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.801690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.801724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.801897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.801931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.802064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.802097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.802236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.802269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.802431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.802465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 00:37:34.682 [2024-07-15 08:04:25.802615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.682 [2024-07-15 08:04:25.802650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.682 qpair failed and we were unable to recover it. 
00:37:34.682 [2024-07-15 08:04:25.802828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.682 [2024-07-15 08:04:25.802863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.682 qpair failed and we were unable to recover it.
00:37:34.682 [... the same three-message sequence repeats, with varying microsecond timestamps, through [2024-07-15 08:04:25.844248] for tqpair values 0x61500021ff00, 0x6150001ffe80, 0x615000210000, and 0x6150001f2a00, always with addr=10.0.0.2, port=4420 ...]
00:37:34.687 [2024-07-15 08:04:25.844411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.687 [2024-07-15 08:04:25.844443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.687 qpair failed and we were unable to recover it. 00:37:34.687 [2024-07-15 08:04:25.844601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.687 [2024-07-15 08:04:25.844634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.687 qpair failed and we were unable to recover it. 00:37:34.687 [2024-07-15 08:04:25.844776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.687 [2024-07-15 08:04:25.844810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.687 qpair failed and we were unable to recover it. 00:37:34.687 [2024-07-15 08:04:25.845025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.687 [2024-07-15 08:04:25.845072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.687 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.845218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.845253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.845402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.845444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.845600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.845634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.845815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.845862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.846023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.846058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.846232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.846268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 
00:37:34.688 [2024-07-15 08:04:25.846431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.846465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.846602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.846635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.846797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.846829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.846997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.847031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.847179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.847226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.847375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.847412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.847558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.847594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.847767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.847800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.847948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.847982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.848155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.848188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 
00:37:34.688 [2024-07-15 08:04:25.848357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.848391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.848547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.848580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.848718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.848753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.848903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.848945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.849136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.849170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.849311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.849344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.849474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.849506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.849676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.849710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.849843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.849881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.688 [2024-07-15 08:04:25.850041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.850073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 
00:37:34.688 [2024-07-15 08:04:25.850203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.688 [2024-07-15 08:04:25.850236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.688 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.850403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.850437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.850632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.850667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.850805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.850839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.851011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.851045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.851204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.851252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.851393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.851428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.851581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.851617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.851760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.851794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.851979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.852013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 
00:37:34.970 [2024-07-15 08:04:25.852173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.852208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.852394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.852430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.852592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.970 [2024-07-15 08:04:25.852625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.970 qpair failed and we were unable to recover it. 00:37:34.970 [2024-07-15 08:04:25.852766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.852799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.852984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.853018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.853179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.853216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.853387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.853421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.853556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.853589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.853751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.853784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.853934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.853982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 
00:37:34.971 [2024-07-15 08:04:25.854131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.854166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.854302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.854335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.854494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.854527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.854688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.854722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.854894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.854928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.855063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.855097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.855303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.855341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.855472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.855505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.855656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.855691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.855862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.855902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 
00:37:34.971 [2024-07-15 08:04:25.856116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.856150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.856311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.856343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.856514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.856546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.856703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.856750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.856943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.856979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.857130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.857166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.857305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.857340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.857545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.857579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.857767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.857811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.857961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.857997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 
00:37:34.971 [2024-07-15 08:04:25.858145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.858193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.858339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.858374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.858519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.858566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.858709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.858742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.858954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.859002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.859158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.859194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.859345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.859378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.859525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.859558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.859691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.859724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 00:37:34.971 [2024-07-15 08:04:25.859888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.971 [2024-07-15 08:04:25.859922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.971 qpair failed and we were unable to recover it. 
00:37:34.971 [2024-07-15 08:04:25.860066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.860101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.860244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.860281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.860460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.860507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.860682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.860718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.860852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.860891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.861027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.861065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.861207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.861240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.861410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.861443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.861601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.861634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.861768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.861800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 
00:37:34.972 [2024-07-15 08:04:25.861969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.862002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.862152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.862199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.862363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.862403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.862594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.862629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.862789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.862822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.862963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.862996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.863166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.863199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.863343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.863377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.863548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.863581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.863733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.863770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 
00:37:34.972 [2024-07-15 08:04:25.863923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.863959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.864113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.864147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.864288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.864320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.864487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.864521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.864655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.864689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.864895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.864943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.865092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.865127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.865269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.865302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.865469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.865503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.865650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.865683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 
00:37:34.972 [2024-07-15 08:04:25.865814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.865847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.866036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.866072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.866251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.866288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.866455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.866488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.866631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.866664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.866824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.866858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.867026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.867059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.867223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.867255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.972 qpair failed and we were unable to recover it. 00:37:34.972 [2024-07-15 08:04:25.867386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.972 [2024-07-15 08:04:25.867418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.867580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.867613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 
00:37:34.973 [2024-07-15 08:04:25.867761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.867795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.867956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.867990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.868122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.868155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.868313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.868345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.868487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.868520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.868660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.868698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.868860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.868899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.869036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.869068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.869236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.869268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.869403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.869436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 
00:37:34.973 [2024-07-15 08:04:25.869579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.869612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.869795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.869827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.869978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.870014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.870185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.870219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.870384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.870417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.870553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.870585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.870743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.870776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.870938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.870986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.871152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.871186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.871343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.871402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 
00:37:34.973 [2024-07-15 08:04:25.871583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.871620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.871778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.871811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.872000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.872034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.872197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.872230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.872384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.872417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.872577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.872610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.872750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.872784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.872947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.872981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.873109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.873143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.873282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.873315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 
00:37:34.973 [2024-07-15 08:04:25.873455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.873488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.873649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.873682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.873857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.873899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.874078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.874126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.874295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.874329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-07-15 08:04:25.874500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-07-15 08:04:25.874534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.874698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.874733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.874885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.874919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.875067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.875104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.875271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.875305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-07-15 08:04:25.875433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.875465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.875627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.875661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.875835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.875870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.876044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.876091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.876245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.876280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.876444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.876486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.876616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.876655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.876836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.876892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.877055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.877091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.877229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.877263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-07-15 08:04:25.877428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.877461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.877596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.877629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.877791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.877824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.877977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.878025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.878177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.878211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.878361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.878396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.878565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.878598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.878741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.878774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.878922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.878956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.879114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.879147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-07-15 08:04:25.879290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.879324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.879492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.879527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.879671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.879707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.879868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.879908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.880044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.880076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.880236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.880269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.880429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.880462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.880630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.880663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.880803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.880836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.880998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.881030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-07-15 08:04:25.881197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.881231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.881389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.881422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.881592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.881625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-07-15 08:04:25.881760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-07-15 08:04:25.881795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.881949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.881997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.882175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.882210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.882350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.882386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.882577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.882610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.882768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.882800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.882934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.882969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 
00:37:34.975 [2024-07-15 08:04:25.883109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.883141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.883297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.883330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.883519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.883551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.883691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.883725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.883858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.883895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.884073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.884126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.884297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.884333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.884523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.884557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.884705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.884740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.884892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.884927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 
00:37:34.975 [2024-07-15 08:04:25.885057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.885090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.885245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.885277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.885411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.885444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.885586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.885618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.885756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.885789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.885972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.886020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.886202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.886248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.886407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.886443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.886610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.886643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.886779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.886812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 
00:37:34.975 [2024-07-15 08:04:25.886954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.886988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.887123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.887157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.887342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.887390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.887534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.887569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.887722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-07-15 08:04:25.887758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-07-15 08:04:25.887900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.887934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.888088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.888120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.888245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.888278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.888440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.888473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.888626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.888659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 
00:37:34.976 [2024-07-15 08:04:25.888847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.888885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.889029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.889064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.889216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.889263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.889400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.889434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.889586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.889621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.889760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.889793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.889928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.889962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.890098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.890130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.890292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.890325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.890462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.890495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 
00:37:34.976 [2024-07-15 08:04:25.890635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.890671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.890820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.890856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.890999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.891032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.891192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.891225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.891357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.891389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.891568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.891620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.891782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.891818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.891975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.892023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.892170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.892205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.892363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.892396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 
00:37:34.976 [2024-07-15 08:04:25.892558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.892592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.892727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.892759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.892924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.892961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.893117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.893154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.893286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.893331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.893496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.893529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.893660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.893693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.893847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.893892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.894059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.894092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.894259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.894293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 
00:37:34.976 [2024-07-15 08:04:25.894455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.894503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.894644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.894680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.894822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.894855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.976 qpair failed and we were unable to recover it. 00:37:34.976 [2024-07-15 08:04:25.895033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.976 [2024-07-15 08:04:25.895068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.895254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.895288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.895419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.895452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.895614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.895648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.895788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.895836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.896007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.896054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.896206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.896241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 
00:37:34.977 [2024-07-15 08:04:25.896416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.896449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.896578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.896610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.896746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.896781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.896926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.896961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.897087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.897120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.897271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.897304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.897491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.897524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.897680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.897712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.897849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.897890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.898033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.898069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 
00:37:34.977 [2024-07-15 08:04:25.898257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.898304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.898451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.898485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.898629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.898663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.898808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.898841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.898985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.899018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.899156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.899194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.899352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.899386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.899529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.899562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.899742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.899775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.899938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.899974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 
00:37:34.977 [2024-07-15 08:04:25.900156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.900203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.900391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.900427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.900567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.900601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.900746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.900779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.900913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.900947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.901088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.901122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.901260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.901293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.901454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.901488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.901653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.901687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 00:37:34.977 [2024-07-15 08:04:25.901826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.977 [2024-07-15 08:04:25.901859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.977 qpair failed and we were unable to recover it. 
00:37:34.977 [2024-07-15 08:04:25.902019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.977 [2024-07-15 08:04:25.902056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420
00:37:34.977 qpair failed and we were unable to recover it.
00:37:34.977-00:37:34.983 [... the three-message sequence above (posix_sock_create connect() failure, nvme_tcp_qpair_connect_sock socket error, unrecoverable qpair) repeats continuously from 08:04:25.902 through 08:04:25.943, cycling over tqpair handles 0x6150001f2a00, 0x6150001ffe80, 0x61500021ff00, and 0x615000210000; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:37:34.983 [2024-07-15 08:04:25.943977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.944011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.944143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.944185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.944338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.944378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.944522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.944556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.944714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.944760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.944905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.944947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.945114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.945147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.945301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.945333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.945503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.945537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.945716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.945763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 
00:37:34.983 [2024-07-15 08:04:25.945929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.983 [2024-07-15 08:04:25.945964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.983 qpair failed and we were unable to recover it. 00:37:34.983 [2024-07-15 08:04:25.946104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.946137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.946293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.946327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.946465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.946498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.946686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.946719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.946888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.946922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.947070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.947103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.947249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.947282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.947427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.947461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.947604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.947637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 
00:37:34.984 [2024-07-15 08:04:25.947774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.947807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.947952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.947987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.948182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.948216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.948372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.948405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.948533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.948566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.948706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.948739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.948882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.948928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.949071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.949106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.949286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.949319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.949484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.949518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 
00:37:34.984 [2024-07-15 08:04:25.949663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.949700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.949904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.949947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.950091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.950124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.950298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.950331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.950512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.950546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.950706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.950739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.950906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.950946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.984 [2024-07-15 08:04:25.951082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.984 [2024-07-15 08:04:25.951115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.984 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.951259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.951292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.951470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.951504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 
00:37:34.985 [2024-07-15 08:04:25.951701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.951734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.951867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.951906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.952059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.952097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.952256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.952289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.952433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.952466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.952633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.952669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.952802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.952835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.953062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.953097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.953240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.953273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.953459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.953492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 
00:37:34.985 [2024-07-15 08:04:25.953631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.953664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.953804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.953839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.953991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.954024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.954187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.954220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.954404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.954438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.954582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.954616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.954754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.954787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.954978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.955012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.955185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.955233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.955409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.955445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 
00:37:34.985 [2024-07-15 08:04:25.955582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.955616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.955803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.955836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.956006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.956040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.956198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.956244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.956388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.956421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.956551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.956585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.956763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.956796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.956947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.956994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.957131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.957171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.957337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.957372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 
00:37:34.985 [2024-07-15 08:04:25.957501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.957534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.957697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.957729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.957861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.957900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.958060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.958094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.958233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.958265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.958424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.985 [2024-07-15 08:04:25.958456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.985 qpair failed and we were unable to recover it. 00:37:34.985 [2024-07-15 08:04:25.958592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.958625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.958780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.958812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.958985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.959033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.959181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.959226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 
00:37:34.986 [2024-07-15 08:04:25.959393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.959427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.959589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.959622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.959791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.959831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.959991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.960024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.960190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.960226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.960392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.960428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.960561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.960595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.960785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.960820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.960974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.961019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.961162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.961196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 
00:37:34.986 [2024-07-15 08:04:25.961342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.961375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.961540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.961573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.961733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.961766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.961926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.961974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.962146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.962190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.962334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.962368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.962534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.962567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.962697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.962729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.962889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.962931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.963078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.963111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 
00:37:34.986 [2024-07-15 08:04:25.963299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.963335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.963500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.963534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.963696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.963732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.963868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.963919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.964058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.964091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.964230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.964262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.964397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.964431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.964617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.964650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.964810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.964843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.964997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.965033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 
00:37:34.986 [2024-07-15 08:04:25.965186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.965220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.965383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.965416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.965550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.965583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.965741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.986 [2024-07-15 08:04:25.965774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.986 qpair failed and we were unable to recover it. 00:37:34.986 [2024-07-15 08:04:25.965936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.965983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.966179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.966215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.966373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.966406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.966569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.966601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.966735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.966769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.966925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.966959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 
00:37:34.987 [2024-07-15 08:04:25.967125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.967160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.967353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.967400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.967547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.967587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.967736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.967771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.967922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.967956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.968137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.968170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.968333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.968366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.968529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.968561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.968691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.968724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.968888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.968944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 
00:37:34.987 [2024-07-15 08:04:25.969094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.969130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.969287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.969320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.969482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.969515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.969667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.969700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.969891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.969925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.970063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.970097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.970253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.970290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.970428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.970461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.970596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.970629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 00:37:34.987 [2024-07-15 08:04:25.970788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.987 [2024-07-15 08:04:25.970821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.987 qpair failed and we were unable to recover it. 
00:37:34.987 [2024-07-15 08:04:25.972245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.987 [2024-07-15 08:04:25.972281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.987 qpair failed and we were unable to recover it.
00:37:34.987 [2024-07-15 08:04:25.972648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.987 [2024-07-15 08:04:25.972695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.987 qpair failed and we were unable to recover it.
[the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously, with only the microsecond timestamps advancing, cycling over tqpairs 0x615000210000, 0x6150001f2a00, 0x61500021ff00 and 0x6150001ffe80, all with addr=10.0.0.2, port=4420, through 2024-07-15 08:04:26.009523]
00:37:34.993 [2024-07-15 08:04:26.009675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.009722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.009897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.009938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.010187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.010223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.010362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.010396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.010560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.010593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.010720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.010753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.010909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.010946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.011106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.011138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.011314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.011347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.011516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.011551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 
00:37:34.993 [2024-07-15 08:04:26.011714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.011748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.011919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.011968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.012134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.012176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.012314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.012347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.012484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.012517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.012681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.012713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.012869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.012938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.013094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.013128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.013300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.013333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.013469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.013503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 
00:37:34.993 [2024-07-15 08:04:26.013660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.013693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.013838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.013894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.014049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.014086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.014243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.014291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.014440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.014476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.014633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.014666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.014808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.014842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.015022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.015056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.993 [2024-07-15 08:04:26.015232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.993 [2024-07-15 08:04:26.015280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.993 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.015433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.015469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 
00:37:34.994 [2024-07-15 08:04:26.015659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.015692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.015847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.015887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.016028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.016062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.016195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.016228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.016391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.016425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.016555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.016588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.016742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.016791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.016983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.017020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.017185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.017233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.017379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.017415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 
00:37:34.994 [2024-07-15 08:04:26.017547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.017580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.017714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.017748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.017919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.017953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.018087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.018120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.018254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.018288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.018417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.018451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.018606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.018639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.018788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.018822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.018981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.019030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.019185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.019230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 
00:37:34.994 [2024-07-15 08:04:26.019409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.019444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.019615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.019660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.019789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.019822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.019979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.020015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.020193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.020227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.020357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.020391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.020537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.020573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.020740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.020773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.020971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.021006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.021151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.021190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 
00:37:34.994 [2024-07-15 08:04:26.021353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.021386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.021567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.021615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.021764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.021799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.021986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.022023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.022219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.022254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.022412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.022457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.994 qpair failed and we were unable to recover it. 00:37:34.994 [2024-07-15 08:04:26.022602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.994 [2024-07-15 08:04:26.022635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.022783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.022816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.022986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.023020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.023154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.023187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 
00:37:34.995 [2024-07-15 08:04:26.023377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.023411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.023574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.023608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.023750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.023784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.023952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.023986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.024131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.024166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.024355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.024390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.024553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.024589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.024788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.024821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.024968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.025002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.025138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.025171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 
00:37:34.995 [2024-07-15 08:04:26.025300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.025334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.025468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.025502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.025663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.025699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.025836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.025869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.026025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.026058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.026186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.026219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.026361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.026394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.026530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.026564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.026740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.026773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.026944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.026983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 
00:37:34.995 [2024-07-15 08:04:26.027148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.027182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.027318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.027351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.027511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.027544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.027701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.027734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.027866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.027908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.028055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.028088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.028251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.028284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.028443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.028476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.028650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.028683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.028844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.028883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 
00:37:34.995 [2024-07-15 08:04:26.029083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.029116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.029252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.029285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.029419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.029452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.029630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.029662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.995 [2024-07-15 08:04:26.029806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.995 [2024-07-15 08:04:26.029840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.995 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.029995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.030029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.030188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.030221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.030375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.030408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.030541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.030575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.030706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.030740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 
00:37:34.996 [2024-07-15 08:04:26.030875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.030915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.031060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.031094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.031222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.031255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.031415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.031449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.031592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.031625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.031784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.031816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.031977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.032011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.032189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.032223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.032397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.032430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.032563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.032595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 
00:37:34.996 [2024-07-15 08:04:26.032759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.032792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.032942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.032977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.033156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.033203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.033406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.033445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.033613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.033648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.033815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.033849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.033994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.034028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.034193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.034227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.034387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.034421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.034580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.034618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 
00:37:34.996 [2024-07-15 08:04:26.034782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.034816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.034983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.035017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.035199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.035246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.035430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.035478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.035650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.035685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.035836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.035871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.036031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.036065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.036199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.036232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.036422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.036455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 00:37:34.996 [2024-07-15 08:04:26.036582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.996 [2024-07-15 08:04:26.036616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.996 qpair failed and we were unable to recover it. 
00:37:34.996 [2024-07-15 08:04:26.036816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.996 [2024-07-15 08:04:26.036864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:34.996 qpair failed and we were unable to recover it.
00:37:35.002 [2024-07-15 08:04:26.075318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.002 [2024-07-15 08:04:26.075354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.002 qpair failed and we were unable to recover it.
00:37:35.002 [2024-07-15 08:04:26.075532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.075567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.075709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.075742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.075900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.075958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.076110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.076146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.076311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.076345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.076473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.076507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.076674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.076706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.076862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.076904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.077042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.077075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.077222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.077256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 
00:37:35.002 [2024-07-15 08:04:26.077435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.077483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.077649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.077684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.077821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.077856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.078003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.078038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.078196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.078229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.078377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.078410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.078549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.078583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.078767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.078800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.078951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.078986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.079161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.079194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 
00:37:35.002 [2024-07-15 08:04:26.079357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.079390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.002 [2024-07-15 08:04:26.079548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.002 [2024-07-15 08:04:26.079581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.002 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.079772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.079818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.079972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.080020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.080205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.080252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.080396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.080432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.080576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.080609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.080751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.080784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.080924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.080958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.081138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.081191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 
00:37:35.003 [2024-07-15 08:04:26.081366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.081401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.081529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.081562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.081707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.081740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.081899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.081948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.082109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.082145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.082279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.082319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.082453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.082487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.082653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.082685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.082835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.082892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.083079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.083127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 
00:37:35.003 [2024-07-15 08:04:26.083264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.083299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.083445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.083479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.083646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.083679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.083826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.083873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.084048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.084095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.084266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.084301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.084458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.084491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.084651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.084683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.084819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.084853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.085007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.085054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 
00:37:35.003 [2024-07-15 08:04:26.085241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.085289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.085433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.085468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.085631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.085665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.085820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.085854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.003 [2024-07-15 08:04:26.086022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.003 [2024-07-15 08:04:26.086071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.003 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.086222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.086258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.086398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.086433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.086584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.086617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.086752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.086786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.086941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.086988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 
00:37:35.004 [2024-07-15 08:04:26.087132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.087167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.087304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.087337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.087467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.087500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.087631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.087664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.087792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.087825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.087988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.088021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.088151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.088183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.088329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.088363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.088504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.088536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.088676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.088713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 
00:37:35.004 [2024-07-15 08:04:26.088844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.088882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.089022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.089054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.089229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.089292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.089435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.089470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.089614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.089648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.089782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.089815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.089954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.089988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.090153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.090185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.090323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.090356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.090500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.090535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 
00:37:35.004 [2024-07-15 08:04:26.090707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.090740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.090911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.090946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.091080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.091113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.091251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.091283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.091426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.091458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.091627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.091660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.091837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.091870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.092010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.092043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.092180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.092212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.092371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.092403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 
00:37:35.004 [2024-07-15 08:04:26.092544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.092576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.092731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.092763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.092889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.004 [2024-07-15 08:04:26.092922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.004 qpair failed and we were unable to recover it. 00:37:35.004 [2024-07-15 08:04:26.093059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.093092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.093225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.093259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.093402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.093434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.093609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.093656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.093804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.093840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.094012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.094060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.094218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.094251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 
00:37:35.005 [2024-07-15 08:04:26.094387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.094419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.094579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.094612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.094789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.094822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.094988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.095036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.095179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.095215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.095366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.095400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.095555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.095588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.095732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.095765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.095945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.095993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.096138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.096175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 
00:37:35.005 [2024-07-15 08:04:26.096310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.096342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.096473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.096506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.096708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.096740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.096897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.096946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.097114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.097148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.097284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.097318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.097455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.097490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.097636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.097668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.097853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.097909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.098079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.098127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 
00:37:35.005 [2024-07-15 08:04:26.098296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.098330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.098489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.098522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.098669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.098702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.098868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.098923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.099065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.099101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.099236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.099270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.099427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.099460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.099594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.099627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.099764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.099797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.099962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.099995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 
00:37:35.005 [2024-07-15 08:04:26.100130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.005 [2024-07-15 08:04:26.100164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.005 qpair failed and we were unable to recover it. 00:37:35.005 [2024-07-15 08:04:26.100299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.100332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.100495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.100529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.100677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.100709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.100870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.100909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.101040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.101072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.101227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.101275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.101427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.101463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.101600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.101635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 00:37:35.006 [2024-07-15 08:04:26.101791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.006 [2024-07-15 08:04:26.101824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.006 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats without interruption from 08:04:26.101977 through 08:04:26.138641 (console time 00:37:35.006 to 00:37:35.011), cycling over tqpairs 0x615000210000, 0x6150001f2a00, 0x6150001ffe80 and 0x61500021ff00, always against addr=10.0.0.2, port=4420 ...]
00:37:35.011 [2024-07-15 08:04:26.138783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.138817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.138965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.138999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.139143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.139178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.139336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.139380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.139522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.139556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.139734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.139768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.139925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.139973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.140149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.140198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.140352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.140398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.140542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.140575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 
00:37:35.011 [2024-07-15 08:04:26.140770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.140804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.140947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.140983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.011 [2024-07-15 08:04:26.141122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.011 [2024-07-15 08:04:26.141157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.011 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.141315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.141348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.141493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.141539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.141680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.141714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.141844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.141883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.142060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.142093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.142263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.142297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.142478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.142511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 
00:37:35.012 [2024-07-15 08:04:26.142647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.142680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.142810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.142844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.143014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.143048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.143190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.143224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.143353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.143386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.143535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.143583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.143728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.143763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.143923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.143969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.144113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.144148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.144307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.144341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 
00:37:35.012 [2024-07-15 08:04:26.144496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.144529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.144685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.144719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.144863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.144903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.145044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.145077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.145217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.145251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.145446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.145479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.145630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.145670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.145808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.145842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.145998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.146047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.146196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.146231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 
00:37:35.012 [2024-07-15 08:04:26.146373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.146406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.146575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.146610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.146750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.146785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.146936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.146971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.147130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.012 [2024-07-15 08:04:26.147163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.012 qpair failed and we were unable to recover it. 00:37:35.012 [2024-07-15 08:04:26.147299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.147332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.147491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.147525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.147674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.147721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.147892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.147928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.148085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.148132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 
00:37:35.013 [2024-07-15 08:04:26.148314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.148349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.148488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.148522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.148671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.148705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.148838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.148872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.149058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.149106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.149298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.149334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.149469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.149503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.149639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.149673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.149856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.149896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.150069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.150103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 
00:37:35.013 [2024-07-15 08:04:26.150261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.150294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.150477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.150511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.150644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.150677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.150813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.150846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.151014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.151061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.151211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.151246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.151382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.151416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.151543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.151576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.151709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.151741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.151888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.151923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 
00:37:35.013 [2024-07-15 08:04:26.152073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.152107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.152268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.152301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.152438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.152470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.152605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.152637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.152791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.152824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.152983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.153016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.153157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.153194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.153381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.153414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.153540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.153572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.153698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.153731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 
00:37:35.013 [2024-07-15 08:04:26.153857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.153897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2a00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.154046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.154095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.154246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.013 [2024-07-15 08:04:26.154282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.013 qpair failed and we were unable to recover it. 00:37:35.013 [2024-07-15 08:04:26.154449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.154484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.154645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.154678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.154813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.154847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.155000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.155034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.155194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.155242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.155388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.155423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.155563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.155596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 
00:37:35.014 [2024-07-15 08:04:26.155743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.155776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.155918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.155952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.156091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.156125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.156281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.156314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.156474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.156507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.156639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.156672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.156861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.156906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.157059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.157093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.157263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.157295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.157431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.157464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 
00:37:35.014 [2024-07-15 08:04:26.157604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.157636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.157778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.157811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.157953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.157988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.158138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.158171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.158332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.158364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.158529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.158561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.158695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.158728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.158885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.158933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.159086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.159122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.159280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.159314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 
00:37:35.014 [2024-07-15 08:04:26.159445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.159479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.159614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.159648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.159775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.159809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.159978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.160013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.160181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.160216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.160355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.160390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.160533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.160571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.160729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.160763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.160907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.160939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 00:37:35.014 [2024-07-15 08:04:26.161080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.161113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.014 qpair failed and we were unable to recover it. 
00:37:35.014 [2024-07-15 08:04:26.161356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.014 [2024-07-15 08:04:26.161389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.161525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.161559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.161720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.161753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.161887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.161921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.162093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.162125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.162254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.162286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.162414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.162446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.162584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.162617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.162752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.162785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 00:37:35.015 [2024-07-15 08:04:26.162918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.015 [2024-07-15 08:04:26.162952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.015 qpair failed and we were unable to recover it. 
00:37:35.015 [2024-07-15 08:04:26.163093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.015 [2024-07-15 08:04:26.163126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.015 qpair failed and we were unable to recover it.
00:37:35.015 [2024-07-15 08:04:26.164013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.015 [2024-07-15 08:04:26.164062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.015 qpair failed and we were unable to recover it.
00:37:35.283 [2024-07-15 08:04:26.200151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.200183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.200318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.200352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.200498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.200531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.200688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.200720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.200850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.200898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.201063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.201097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.201257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.201289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.201424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.201456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.201590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.201624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.201758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.201790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-07-15 08:04:26.201933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.201966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.202153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.202187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.202316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.202350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.202502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.202550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.202695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.202730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.202896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.202933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.203071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.203105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.203235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.203268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.203401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.203434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.203563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.203596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-07-15 08:04:26.203733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.203765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.203915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.203950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.204124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.204158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.204317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.204361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.204534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.204567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.204725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.204757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.204890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.204923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.205087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.205119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.205289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.205321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.205459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.205494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-07-15 08:04:26.205642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.205675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.205803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.205837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.205984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.206017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.206175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.206208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-07-15 08:04:26.206386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-07-15 08:04:26.206419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.206582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.206615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.206743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.206776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.206912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.206946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.207101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.207134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.207283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.207316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 
00:37:35.284 [2024-07-15 08:04:26.207451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.207483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.207618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.207651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.207801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.207833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.207972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.208005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.208140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.208173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.208327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.208359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.208518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.208550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.208720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.208753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.208914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.208948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.209113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.209146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 
00:37:35.284 [2024-07-15 08:04:26.209304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.209336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.209503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.209664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.209697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.209856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.209901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.210037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.210070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.210224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.210257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.210393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.210426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.210583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.210616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.210775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.210808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.210950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.210984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 
00:37:35.284 [2024-07-15 08:04:26.211131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.211163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.211348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.211381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.211538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.211571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.211706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.211740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.211871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.211917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.212047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.212079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.212239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.284 [2024-07-15 08:04:26.212273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.284 qpair failed and we were unable to recover it. 00:37:35.284 [2024-07-15 08:04:26.212446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.212480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.212612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.212645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.212777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.212809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 
00:37:35.285 [2024-07-15 08:04:26.212951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.212985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.213167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.213199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.213366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.213399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.213530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.213562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.213722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.213755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.213888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.213921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.214067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.214099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.214283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.214316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.214474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.214509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.214666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.214698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 
00:37:35.285 [2024-07-15 08:04:26.214836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.214868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.215058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.215106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.215259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.215295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.215438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.215482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.215619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.215652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.215780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.215813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.215993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.216027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.216165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.216199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.216345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.216379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.216508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.216540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 
00:37:35.285 [2024-07-15 08:04:26.216676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.216708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.216864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.216905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.217063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.217096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.217239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.217280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.217442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.217475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.217601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.217634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.217766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.217799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.217967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.218017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.218202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.218241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.218369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.218402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 
00:37:35.285 [2024-07-15 08:04:26.218575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.218608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.218741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.218775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.218917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.218950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.219082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.219115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.219253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.219286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.219450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.219485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.285 [2024-07-15 08:04:26.219629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.285 [2024-07-15 08:04:26.219662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.285 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.219804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.219837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.219980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.220014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.220159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.220194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 
00:37:35.286 [2024-07-15 08:04:26.220329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.220362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.220492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.220525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.220656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.220689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.220821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.220853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.221013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.221046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.221185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.221219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.221348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.221381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.221531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.221564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.221710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.221744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.221885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.221919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 
00:37:35.286 [2024-07-15 08:04:26.222081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.222114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.222239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.222272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.222434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.222466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.222596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.222628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.222762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.222796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.222929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.222966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.223167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.223201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.223338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.223383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.223514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.223548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.223690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.223723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 
00:37:35.286 [2024-07-15 08:04:26.223864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.223914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.224077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.224110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.224243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.224275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.224415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.224453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.224592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.224624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.224782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.224815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.224963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.224997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.225155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.225187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.225322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.225355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.225497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.225530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 
00:37:35.286 [2024-07-15 08:04:26.225672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.225705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-07-15 08:04:26.226514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-07-15 08:04:26.226548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it.
[... dozens of near-identical connect() failed / qpair-recovery failures for tqpair 0x61500021ff00 and 0x6150001ffe80 omitted; timestamps 08:04:26.225835 through 08:04:26.233616 ...]
00:37:35.287 A controller has encountered a failure and is being reset.
[... further identical connect() failed / qpair-recovery failures omitted; timestamps 08:04:26.233813 through 08:04:26.237862 ...]
00:37:35.288 [2024-07-15 08:04:26.238024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-07-15 08:04:26.238074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it.
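Context for the failures above: errno 111 is Linux ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the disconnect test has taken the target side down, so every reconnect attempt from the host driver fails immediately until the controller is finally declared failed below. A minimal probe showing the same condition from a shell -- illustrative only, not part of the test scripts; it assumes bash's /dev/tcp redirection is available:

    # exits almost immediately with "refused" while the listener is down
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
        && echo 'listener is up on 10.0.0.2:4420' \
        || echo 'connection refused or timed out -> what the log reports as errno = 111'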
00:37:35.288 [2024-07-15 08:04:26.238307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-07-15 08:04:26.238362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:35.288 [2024-07-15 08:04:26.238390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:35.288 [2024-07-15 08:04:26.238431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:35.288 [2024-07-15 08:04:26.238460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.288 [2024-07-15 08:04:26.238489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.288 [2024-07-15 08:04:26.238514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.288 Unable to reset the controller. 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.288 Malloc0 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.288 [2024-07-15 08:04:26.354070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.288 08:04:26 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.288 [2024-07-15 08:04:26.383769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.288 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.289 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:35.289 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.289 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.289 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.289 08:04:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1252866 00:37:36.222 Controller properly reset. 00:37:40.412 Initializing NVMe Controllers 00:37:40.412 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:40.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:40.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:40.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:40.412 Initialization complete. Launching workers. 
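For reference, the rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py; run by hand against an already-started nvmf_tgt (default socket /var/tmp/spdk.sock), the same target setup would look roughly like the sketch below. NQN, serial, address and flags are copied from the trace; the comments are assumptions about the defaults:

    RPC=./scripts/rpc.py                                  # talks to /var/tmp/spdk.sock by default
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_transport -t tcp -o                  # flags reproduced verbatim from the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After the last call the target reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420", matching the notice in the trace above.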
00:37:40.412 Starting thread on core 1 00:37:40.412 Starting thread on core 2 00:37:40.412 Starting thread on core 3 00:37:40.412 Starting thread on core 0 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:40.412 00:37:40.412 real 0m11.488s 00:37:40.412 user 0m34.022s 00:37:40.412 sys 0m7.593s 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:40.412 ************************************ 00:37:40.412 END TEST nvmf_target_disconnect_tc2 00:37:40.412 ************************************ 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:40.412 rmmod nvme_tcp 00:37:40.412 rmmod nvme_fabrics 00:37:40.412 rmmod nvme_keyring 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1253329 ']' 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1253329 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1253329 ']' 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1253329 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253329 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1253329' 00:37:40.412 killing process with pid 1253329 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1253329 00:37:40.412 08:04:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1253329 00:37:41.786 
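The teardown above (sync, rmmod of nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess) follows the harness's standard shape: confirm the pid is still alive, kill it, and wait so the SPDK reactor can exit cleanly before the network namespace is removed. A minimal stand-in for that helper -- names illustrative; the real killprocess in autotest_common.sh additionally checks the process name via ps, as visible in the trace:

    kill_app() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reap it so sockets and hugepages are released
    }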
08:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:41.786 08:04:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:44.324 08:04:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:44.324 00:37:44.324 real 0m17.427s 00:37:44.324 user 1m1.543s 00:37:44.324 sys 0m10.221s 00:37:44.324 08:04:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:44.324 08:04:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:44.324 ************************************ 00:37:44.324 END TEST nvmf_target_disconnect 00:37:44.324 ************************************ 00:37:44.324 08:04:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:44.324 08:04:35 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:37:44.324 08:04:35 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:44.324 08:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.324 08:04:35 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:37:44.324 00:37:44.324 real 28m57.054s 00:37:44.324 user 78m16.597s 00:37:44.324 sys 6m1.194s 00:37:44.324 08:04:35 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:44.324 08:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.324 ************************************ 00:37:44.324 END TEST nvmf_tcp 00:37:44.324 ************************************ 00:37:44.324 08:04:35 -- common/autotest_common.sh@1142 -- # return 0 00:37:44.324 08:04:35 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:37:44.324 08:04:35 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:44.324 08:04:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:44.324 08:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:44.324 08:04:35 -- common/autotest_common.sh@10 -- # set +x 00:37:44.324 ************************************ 00:37:44.324 START TEST spdkcli_nvmf_tcp 00:37:44.324 ************************************ 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:44.324 * Looking for test storage... 
00:37:44.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1254656 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1254656 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1254656 ']' 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:44.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:44.324 08:04:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.324 [2024-07-15 08:04:35.283215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:44.324 [2024-07-15 08:04:35.283376] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254656 ] 00:37:44.324 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.324 [2024-07-15 08:04:35.415365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:44.581 [2024-07-15 08:04:35.671860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.581 [2024-07-15 08:04:35.671867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.147 08:04:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:45.147 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:45.147 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:45.147 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:45.147 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:45.147 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:45.147 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:45.147 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:45.147 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:45.147 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:45.147 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:45.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:45.147 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:45.147 ' 00:37:48.444 [2024-07-15 08:04:38.966630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:49.025 [2024-07-15 08:04:40.208020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:51.608 [2024-07-15 08:04:42.491121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:53.521 [2024-07-15 08:04:44.453566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:54.899 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:54.899 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:54.899 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:54.899 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:54.899 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:54.899 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:54.899 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:54.899 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:54.899 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:54.899 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:54.899 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:54.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:54.900 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:54.900 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:54.900 08:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:55.468 08:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:55.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:55.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:55.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:55.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:55.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:55.468 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:55.468 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:55.468 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:55.468 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:55.468 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:55.468 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:55.468 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:55.468 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:55.468 ' 00:38:02.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:02.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:02.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:02.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:02.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:02.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:02.033 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:02.033 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:02.033 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:02.033 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:02.033 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:38:02.033 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:02.033 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:02.033 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1254656 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1254656 ']' 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1254656 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254656 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254656' 00:38:02.033 killing process with pid 1254656 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1254656 00:38:02.033 08:04:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1254656 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1254656 ']' 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1254656 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1254656 ']' 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1254656 00:38:02.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1254656) - No such process 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1254656 is not found' 00:38:02.290 Process with pid 1254656 is not found 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:02.290 08:04:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:02.548 00:38:02.548 real 0m18.398s 00:38:02.548 user 0m37.924s 00:38:02.548 sys 0m1.047s 00:38:02.548 08:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:02.548 08:04:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:02.548 ************************************ 00:38:02.548 END TEST spdkcli_nvmf_tcp 00:38:02.548 ************************************ 00:38:02.548 08:04:53 -- common/autotest_common.sh@1142 -- # return 0 00:38:02.548 08:04:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
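Everything in this spdkcli section is driven through spdkcli_job.py, but each "Executing command" line maps one-to-one onto an ordinary spdkcli invocation; the check_match step above already uses the one-shot argv form (spdkcli.py ll /nvmf). A hand-run sketch of the same create/inspect/delete cycle, with commands copied from the job lists above and the one-shot form assumed throughout:

    CLI=./scripts/spdkcli.py                 # one-shot argv form, as used by check_match
    $CLI /bdevs/malloc create 32 512 Malloc1
    $CLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    $CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    $CLI ll /nvmf                            # the listing that check_match diffs against the .match file
    $CLI /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
    $CLI /bdevs/malloc delete Malloc1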
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:02.548 08:04:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:02.548 08:04:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:02.548 08:04:53 -- common/autotest_common.sh@10 -- # set +x 00:38:02.548 ************************************ 00:38:02.548 START TEST nvmf_identify_passthru 00:38:02.548 ************************************ 00:38:02.548 08:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:02.548 * Looking for test storage... 00:38:02.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:02.548 08:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.548 08:04:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.548 08:04:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.548 08:04:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:02.548 08:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.548 08:04:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.548 08:04:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.548 08:04:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:02.548 08:04:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.548 08:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.548 08:04:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:02.548 08:04:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:02.548 08:04:53 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:38:02.548 08:04:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:04.451 08:04:55 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:04.451 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:04.451 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:04.451 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:04.451 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
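The nvmf_tcp_init step traced below stitches the two E810 ports into a self-contained test network: one port moves into a private network namespace and acts as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic crosses a real NIC-to-NIC path. A minimal standalone sketch of the same topology, assuming the cvl_0_0/cvl_0_1 interface names from this run:

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

Both directions are then sanity-checked with a one-packet ping before any NVMe traffic is attempted, as the PING output that follows shows.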
00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:04.451 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:04.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:04.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:38:04.710 00:38:04.710 --- 10.0.0.2 ping statistics --- 00:38:04.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.710 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:04.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:04.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:38:04.710 00:38:04.710 --- 10.0.0.1 ping statistics --- 00:38:04.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.710 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:04.710 08:04:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:38:04.710 08:04:55 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:04.710 08:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:04.710 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.986 
08:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:38:09.986 08:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:38:09.986 08:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:09.986 08:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:09.986 EAL: No free 2048 kB hugepages reported on node 1 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1259463 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:14.179 08:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1259463 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1259463 ']' 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:14.179 08:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.179 [2024-07-15 08:05:04.658006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:14.179 [2024-07-15 08:05:04.658154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.179 EAL: No free 2048 kB hugepages reported on node 1 00:38:14.179 [2024-07-15 08:05:04.798884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:14.179 [2024-07-15 08:05:05.057149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.179 [2024-07-15 08:05:05.057258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
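The target below is started with --wait-for-rpc, so it binds /var/tmp/spdk.sock and then pauses until the framework is explicitly initialized; that window is what lets the test enable identify passthru before any subsystem exists. rpc_cmd is a thin wrapper over the SPDK JSON-RPC client, so the same two calls can be issued by hand, assuming the default socket path:

  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # legal only pre-init
  scripts/rpc.py framework_start_init                        # resume target startup

The raw request/response pairs for both calls are echoed verbatim in the INFO lines that follow.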
00:38:14.179 [2024-07-15 08:05:05.057287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.179 [2024-07-15 08:05:05.057308] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.179 [2024-07-15 08:05:05.057329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.179 [2024-07-15 08:05:05.057455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.179 [2024-07-15 08:05:05.057530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:14.179 [2024-07-15 08:05:05.057606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.179 [2024-07-15 08:05:05.057616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:38:14.439 08:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.439 INFO: Log level set to 20 00:38:14.439 INFO: Requests: 00:38:14.439 { 00:38:14.439 "jsonrpc": "2.0", 00:38:14.439 "method": "nvmf_set_config", 00:38:14.439 "id": 1, 00:38:14.439 "params": { 00:38:14.439 "admin_cmd_passthru": { 00:38:14.439 "identify_ctrlr": true 00:38:14.439 } 00:38:14.439 } 00:38:14.439 } 00:38:14.439 00:38:14.439 INFO: response: 00:38:14.439 { 00:38:14.439 "jsonrpc": "2.0", 00:38:14.439 "id": 1, 00:38:14.439 "result": true 00:38:14.439 } 00:38:14.439 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.439 08:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.439 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.439 INFO: Setting log level to 20 00:38:14.439 INFO: Setting log level to 20 00:38:14.439 INFO: Log level set to 20 00:38:14.439 INFO: Log level set to 20 00:38:14.439 INFO: Requests: 00:38:14.439 { 00:38:14.439 "jsonrpc": "2.0", 00:38:14.439 "method": "framework_start_init", 00:38:14.439 "id": 1 00:38:14.439 } 00:38:14.439 00:38:14.439 INFO: Requests: 00:38:14.439 { 00:38:14.439 "jsonrpc": "2.0", 00:38:14.439 "method": "framework_start_init", 00:38:14.439 "id": 1 00:38:14.439 } 00:38:14.439 00:38:14.698 [2024-07-15 08:05:05.898802] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:14.698 INFO: response: 00:38:14.698 { 00:38:14.698 "jsonrpc": "2.0", 00:38:14.698 "id": 1, 00:38:14.698 "result": true 00:38:14.698 } 00:38:14.698 00:38:14.698 INFO: response: 00:38:14.698 { 00:38:14.698 "jsonrpc": "2.0", 00:38:14.698 "id": 1, 00:38:14.698 "result": true 00:38:14.698 } 00:38:14.698 00:38:14.699 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.699 08:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:14.699 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.699 08:05:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:14.699 INFO: Setting log level to 40 00:38:14.699 INFO: Setting log level to 40 00:38:14.699 INFO: Setting log level to 40 00:38:14.699 [2024-07-15 08:05:05.911784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.699 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:14.699 08:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:14.699 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:14.699 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.957 08:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:38:14.957 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.957 08:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:18.266 Nvme0n1 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.266 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.266 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.266 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:18.266 [2024-07-15 08:05:08.861406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.266 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.266 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:18.267 [ 00:38:18.267 { 00:38:18.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:18.267 "subtype": "Discovery", 00:38:18.267 "listen_addresses": [], 00:38:18.267 "allow_any_host": true, 00:38:18.267 "hosts": [] 00:38:18.267 }, 00:38:18.267 { 00:38:18.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:18.267 "subtype": "NVMe", 00:38:18.267 "listen_addresses": [ 00:38:18.267 { 00:38:18.267 "trtype": "TCP", 00:38:18.267 "adrfam": "IPv4", 00:38:18.267 "traddr": "10.0.0.2", 00:38:18.267 "trsvcid": "4420" 00:38:18.267 } 00:38:18.267 ], 00:38:18.267 "allow_any_host": true, 00:38:18.267 "hosts": [], 00:38:18.267 "serial_number": 
"SPDK00000000000001", 00:38:18.267 "model_number": "SPDK bdev Controller", 00:38:18.267 "max_namespaces": 1, 00:38:18.267 "min_cntlid": 1, 00:38:18.267 "max_cntlid": 65519, 00:38:18.267 "namespaces": [ 00:38:18.267 { 00:38:18.267 "nsid": 1, 00:38:18.267 "bdev_name": "Nvme0n1", 00:38:18.267 "name": "Nvme0n1", 00:38:18.267 "nguid": "47932BEFBDC4404F8C846AB3F0FD1F87", 00:38:18.267 "uuid": "47932bef-bdc4-404f-8c84-6ab3f0fd1f87" 00:38:18.267 } 00:38:18.267 ] 00:38:18.267 } 00:38:18.267 ] 00:38:18.267 08:05:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.267 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:18.267 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:18.267 08:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:18.267 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:18.267 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:18.267 08:05:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:18.267 rmmod nvme_tcp 00:38:18.267 rmmod nvme_fabrics 00:38:18.267 rmmod nvme_keyring 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:18.267 08:05:09 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1259463 ']' 00:38:18.267 08:05:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1259463 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1259463 ']' 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1259463 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:18.267 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259463 00:38:18.526 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:18.527 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:18.527 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259463' 00:38:18.527 killing process with pid 1259463 00:38:18.527 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1259463 00:38:18.527 08:05:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1259463 00:38:21.056 08:05:12 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:21.056 08:05:12 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:21.056 08:05:12 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:21.056 08:05:12 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:21.056 08:05:12 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:21.056 08:05:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.056 08:05:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:21.056 08:05:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.957 08:05:14 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:22.957 00:38:22.957 real 0m20.595s 00:38:22.957 user 0m33.484s 00:38:22.957 sys 0m2.677s 00:38:22.957 08:05:14 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:22.957 08:05:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:22.957 ************************************ 00:38:22.957 END TEST nvmf_identify_passthru 00:38:22.957 ************************************ 00:38:23.216 08:05:14 -- common/autotest_common.sh@1142 -- # return 0 00:38:23.216 08:05:14 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:23.216 08:05:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:23.216 08:05:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:23.216 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:38:23.216 ************************************ 00:38:23.216 START TEST nvmf_dif 00:38:23.216 ************************************ 00:38:23.216 08:05:14 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:23.216 * Looking for test storage... 
00:38:23.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:23.216 08:05:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.216 08:05:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:23.216 08:05:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.216 08:05:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.216 08:05:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.216 08:05:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.216 08:05:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.217 08:05:14 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.217 08:05:14 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.217 08:05:14 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.217 08:05:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.217 08:05:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.217 08:05:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.217 08:05:14 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:38:23.217 08:05:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:23.217 08:05:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:23.217 08:05:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:23.217 08:05:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:23.217 08:05:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:23.217 08:05:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.217 08:05:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:23.217 08:05:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:23.217 08:05:14 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:23.217 08:05:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:25.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:25.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:25.119 08:05:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
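The discovery loop being traced here maps each supported PCI function to its kernel net device through sysfs; the pci_net_devs glob a few lines up is the whole mechanism. A standalone equivalent, assuming the two E810 functions seen in this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls /sys/bus/pci/devices/$pci/net/    # prints the bound netdev, e.g. cvl_0_0
  done

Only interfaces that report an up state pass the subsequent [[ up == up ]] check and are appended to net_devs.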
00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:25.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:25.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:25.120 08:05:16 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:25.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:25.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:38:25.120 00:38:25.120 --- 10.0.0.2 ping statistics --- 00:38:25.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.120 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:25.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:25.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:38:25.120 00:38:25.120 --- 10.0.0.1 ping statistics --- 00:38:25.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.120 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:25.120 08:05:16 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:26.494 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:26.494 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:26.494 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:26.494 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:26.494 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:26.494 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:26.494 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:26.494 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:26.494 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:26.494 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:26.494 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:26.494 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:26.494 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:26.494 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:26.494 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:26.494 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:26.494 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:26.494 08:05:17 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:26.494 08:05:17 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:26.494 08:05:17 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:26.494 08:05:17 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:26.494 08:05:17 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:26.494 08:05:17 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:26.494 08:05:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:26.494 08:05:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:26.495 08:05:17 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.495 08:05:17 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1262935 00:38:26.495 08:05:17 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:26.495 08:05:17 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1262935 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1262935 ']' 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:26.495 08:05:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.495 [2024-07-15 08:05:17.630241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:26.495 [2024-07-15 08:05:17.630386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.495 EAL: No free 2048 kB hugepages reported on node 1 00:38:26.753 [2024-07-15 08:05:17.769642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.011 [2024-07-15 08:05:17.992717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:27.011 [2024-07-15 08:05:17.992782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:27.011 [2024-07-15 08:05:17.992820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:27.011 [2024-07-15 08:05:17.992841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:27.011 [2024-07-15 08:05:17.992873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
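The dif.sh run that follows exercises DIF insert/strip on the wire: the transport is created with -o --dif-insert-or-strip (the option appended to NVMF_TRANSPORT_OPTS above), and each test subsystem is backed by a 64 MB null bdev of 512-byte blocks with 16 bytes of metadata and DIF type 1, matching the NULL_* defaults from the top of the script. The equivalent standalone RPC calls, assuming the default socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

Both commands appear verbatim via rpc_cmd in the trace below.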
00:38:27.011 [2024-07-15 08:05:17.992944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:38:27.575 08:05:18 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:27.575 08:05:18 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:27.575 08:05:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:27.575 08:05:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:27.575 [2024-07-15 08:05:18.620625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:27.575 08:05:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:27.575 08:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:27.575 ************************************ 00:38:27.575 START TEST fio_dif_1_default 00:38:27.575 ************************************ 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:27.575 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:27.575 bdev_null0 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:27.576 [2024-07-15 08:05:18.681032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:27.576 { 00:38:27.576 "params": { 00:38:27.576 "name": "Nvme$subsystem", 00:38:27.576 "trtype": "$TEST_TRANSPORT", 00:38:27.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:27.576 "adrfam": "ipv4", 00:38:27.576 "trsvcid": "$NVMF_PORT", 00:38:27.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:27.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:27.576 "hdgst": ${hdgst:-false}, 00:38:27.576 "ddgst": ${ddgst:-false} 00:38:27.576 }, 00:38:27.576 "method": "bdev_nvme_attach_controller" 00:38:27.576 } 00:38:27.576 EOF 00:38:27.576 )") 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:27.576 "params": { 00:38:27.576 "name": "Nvme0", 00:38:27.576 "trtype": "tcp", 00:38:27.576 "traddr": "10.0.0.2", 00:38:27.576 "adrfam": "ipv4", 00:38:27.576 "trsvcid": "4420", 00:38:27.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.576 "hdgst": false, 00:38:27.576 "ddgst": false 00:38:27.576 }, 00:38:27.576 "method": "bdev_nvme_attach_controller" 00:38:27.576 }' 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:27.576 08:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.835 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:27.835 fio-3.35 00:38:27.835 Starting 1 thread 00:38:27.835 EAL: No free 2048 kB hugepages reported on node 1 00:38:40.036 00:38:40.036 filename0: (groupid=0, jobs=1): err= 0: pid=1263286: Mon Jul 15 08:05:29 2024 00:38:40.036 read: IOPS=186, BW=745KiB/s (762kB/s)(7472KiB/10036msec) 00:38:40.036 slat (nsec): min=5922, max=79240, avg=15644.40, stdev=6104.84 00:38:40.036 clat (usec): min=843, max=44508, avg=21443.80, stdev=20434.09 00:38:40.036 lat (usec): min=855, max=44530, avg=21459.44, stdev=20434.33 00:38:40.036 clat percentiles (usec): 00:38:40.036 | 1.00th=[ 873], 5.00th=[ 898], 10.00th=[ 930], 20.00th=[ 963], 00:38:40.036 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[41157], 60.00th=[41681], 00:38:40.036 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:38:40.036 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:38:40.036 | 99.99th=[44303] 00:38:40.036 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=745.60, stdev=31.32, samples=20 00:38:40.036 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:38:40.036 lat (usec) : 1000=34.74% 00:38:40.036 lat (msec) : 2=15.15%, 50=50.11% 00:38:40.036 cpu : usr=91.11%, sys=8.41%, ctx=13, majf=0, minf=1637 00:38:40.036 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:40.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.036 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:40.036 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:38:40.036 00:38:40.036 Run status group 0 (all jobs): 00:38:40.036 READ: bw=745KiB/s (762kB/s), 745KiB/s-745KiB/s (762kB/s-762kB/s), io=7472KiB (7651kB), run=10036-10036msec 00:38:40.036 ----------------------------------------------------- 00:38:40.036 Suppressions used: 00:38:40.036 count bytes template 00:38:40.036 1 8 /usr/src/fio/parse.c 00:38:40.036 1 8 libtcmalloc_minimal.so 00:38:40.036 1 904 libcrypto.so 00:38:40.036 ----------------------------------------------------- 00:38:40.036 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.036 00:38:40.036 real 0m12.225s 00:38:40.036 user 0m11.211s 00:38:40.036 sys 0m1.262s 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:40.036 ************************************ 00:38:40.036 END TEST fio_dif_1_default 00:38:40.036 ************************************ 00:38:40.036 08:05:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:40.036 08:05:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:40.036 08:05:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:40.036 08:05:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:40.036 08:05:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:40.036 ************************************ 00:38:40.036 START TEST fio_dif_1_multi_subsystems 00:38:40.036 ************************************ 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:40.036 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:40.037 
08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 bdev_null0 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 [2024-07-15 08:05:30.952388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 bdev_null1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
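Stripped of the xtrace bookkeeping, each create_subsystem call in this test issues the same four RPCs, parameterized only by the subsystem index. The arguments below are copied from the trace; rpc.py stands in for the harness's rpc_cmd wrapper:

sub=0   # the multi-subsystems test repeats this for sub=0 and sub=1
# 64 MiB null bdev, 512-byte blocks carrying 16 bytes of metadata, T10 DIF type 1
scripts/rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
    --serial-number 53313233-$sub --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
    -t tcp -a 10.0.0.2 -s 4420

Both subsystems listen on the same 10.0.0.2:4420 endpoint and are told apart purely by NQN at connect time.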
00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:40.037 { 00:38:40.037 "params": { 00:38:40.037 "name": "Nvme$subsystem", 00:38:40.037 "trtype": "$TEST_TRANSPORT", 00:38:40.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:40.037 "adrfam": "ipv4", 00:38:40.037 "trsvcid": "$NVMF_PORT", 00:38:40.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:40.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:40.037 "hdgst": ${hdgst:-false}, 00:38:40.037 "ddgst": ${ddgst:-false} 00:38:40.037 }, 00:38:40.037 "method": "bdev_nvme_attach_controller" 00:38:40.037 } 00:38:40.037 EOF 00:38:40.037 )") 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 
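The gen_nvmf_target_json heredoc being traced here accumulates one bdev_nvme_attach_controller stanza per subsystem index and comma-joins them. A condensed, self-contained rendering of that pattern follows; the function name is mine, the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT variables are pre-expanded to this run's values, and the real helper additionally wraps the result in the enclosing SPDK JSON config document before validating it with jq:

gen_attach_stanzas() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                     # comma-join the per-subsystem stanzas
    printf '%s\n' "${config[*]}"
}

gen_attach_stanzas 0 1   # reproduces the two-controller document printed a few entries below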
00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:40.037 { 00:38:40.037 "params": { 00:38:40.037 "name": "Nvme$subsystem", 00:38:40.037 "trtype": "$TEST_TRANSPORT", 00:38:40.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:40.037 "adrfam": "ipv4", 00:38:40.037 "trsvcid": "$NVMF_PORT", 00:38:40.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:40.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:40.037 "hdgst": ${hdgst:-false}, 00:38:40.037 "ddgst": ${ddgst:-false} 00:38:40.037 }, 00:38:40.037 "method": "bdev_nvme_attach_controller" 00:38:40.037 } 00:38:40.037 EOF 00:38:40.037 )") 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
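The ldd | grep libasan | awk '{print $3}' probe just traced exists because the fio plugin is built with AddressSanitizer: the ASan runtime must be the first object the dynamic loader maps, so the harness prepends the detected library to LD_PRELOAD ahead of the plugin before launching fio below. Condensed, with paths verbatim from this run ($plugin is just shorthand):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# Find the ASan runtime the plugin links against (/usr/lib64/libasan.so.8 here)...
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# ...and load it first, or ASan refuses to start. /dev/fd/62 carries the
# bdev_nvme_attach_controller JSON from gen_nvmf_target_json; /dev/fd/61 is
# the fio job file from gen_fio_conf.
LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61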
00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:40.037 08:05:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:40.037 "params": { 00:38:40.037 "name": "Nvme0", 00:38:40.037 "trtype": "tcp", 00:38:40.037 "traddr": "10.0.0.2", 00:38:40.037 "adrfam": "ipv4", 00:38:40.037 "trsvcid": "4420", 00:38:40.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.037 "hdgst": false, 00:38:40.037 "ddgst": false 00:38:40.037 }, 00:38:40.037 "method": "bdev_nvme_attach_controller" 00:38:40.037 },{ 00:38:40.037 "params": { 00:38:40.037 "name": "Nvme1", 00:38:40.037 "trtype": "tcp", 00:38:40.037 "traddr": "10.0.0.2", 00:38:40.037 "adrfam": "ipv4", 00:38:40.037 "trsvcid": "4420", 00:38:40.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:40.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:40.037 "hdgst": false, 00:38:40.037 "ddgst": false 00:38:40.037 }, 00:38:40.037 "method": "bdev_nvme_attach_controller" 00:38:40.037 }' 00:38:40.037 08:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:40.037 08:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:40.037 08:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:38:40.037 08:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:40.037 08:05:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:40.332 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:40.332 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:40.332 fio-3.35 00:38:40.332 Starting 2 threads 00:38:40.332 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.545 00:38:52.545 filename0: (groupid=0, jobs=1): err= 0: pid=1264815: Mon Jul 15 08:05:42 2024 00:38:52.545 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10005msec) 00:38:52.545 slat (nsec): min=5208, max=66754, avg=16171.07, stdev=7311.75 00:38:52.545 clat (usec): min=40889, max=44166, avg=41635.41, stdev=496.39 00:38:52.545 lat (usec): min=40899, max=44202, avg=41651.59, stdev=496.78 00:38:52.545 clat percentiles (usec): 00:38:52.545 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:52.545 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:38:52.545 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:52.545 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:38:52.545 | 99.99th=[44303] 00:38:52.545 bw ( KiB/s): min= 352, max= 384, per=33.79%, avg=382.40, stdev= 7.16, samples=20 00:38:52.545 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:38:52.545 lat (msec) : 50=100.00% 00:38:52.545 cpu : usr=94.09%, sys=5.42%, ctx=14, majf=0, minf=1637 00:38:52.545 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.545 issued rwts: total=960,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:52.545 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:52.545 filename1: (groupid=0, jobs=1): err= 0: pid=1264816: Mon Jul 15 08:05:42 2024 00:38:52.545 read: IOPS=186, BW=747KiB/s (765kB/s)(7472KiB/10001msec) 00:38:52.545 slat (nsec): min=5526, max=77734, avg=14508.32, stdev=6137.52 00:38:52.545 clat (usec): min=849, max=45039, avg=21370.15, stdev=20366.46 00:38:52.545 lat (usec): min=859, max=45053, avg=21384.65, stdev=20365.06 00:38:52.545 clat percentiles (usec): 00:38:52.545 | 1.00th=[ 865], 5.00th=[ 889], 10.00th=[ 898], 20.00th=[ 930], 00:38:52.545 | 30.00th=[ 963], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41157], 00:38:52.545 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:38:52.545 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:38:52.545 | 99.99th=[44827] 00:38:52.545 bw ( KiB/s): min= 704, max= 768, per=65.89%, avg=745.75, stdev=31.11, samples=20 00:38:52.545 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:38:52.545 lat (usec) : 1000=32.49% 00:38:52.545 lat (msec) : 2=17.40%, 50=50.11% 00:38:52.545 cpu : usr=93.41%, sys=6.10%, ctx=18, majf=0, minf=1637 00:38:52.545 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.545 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.545 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:52.545 00:38:52.545 Run status group 0 (all jobs): 00:38:52.545 READ: bw=1131KiB/s (1158kB/s), 384KiB/s-747KiB/s (393kB/s-765kB/s), io=11.0MiB (11.6MB), run=10001-10005msec 00:38:52.545 ----------------------------------------------------- 00:38:52.545 Suppressions used: 00:38:52.545 count bytes template 00:38:52.545 2 16 /usr/src/fio/parse.c 00:38:52.545 1 8 libtcmalloc_minimal.so 00:38:52.545 1 904 libcrypto.so 00:38:52.545 ----------------------------------------------------- 00:38:52.545 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@45 -- # for sub in "$@" 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 00:38:52.545 real 0m12.432s 00:38:52.545 user 0m21.140s 00:38:52.545 sys 0m1.600s 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 ************************************ 00:38:52.545 END TEST fio_dif_1_multi_subsystems 00:38:52.545 ************************************ 00:38:52.545 08:05:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:52.545 08:05:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:52.545 08:05:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:52.545 08:05:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 ************************************ 00:38:52.545 START TEST fio_dif_rand_params 00:38:52.545 ************************************ 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:52.545 
08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 bdev_null0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.545 [2024-07-15 08:05:43.432163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:52.545 { 00:38:52.545 "params": { 00:38:52.545 "name": "Nvme$subsystem", 00:38:52.545 "trtype": "$TEST_TRANSPORT", 00:38:52.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.545 "adrfam": "ipv4", 00:38:52.545 "trsvcid": "$NVMF_PORT", 00:38:52.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.545 "hdgst": ${hdgst:-false}, 00:38:52.545 "ddgst": ${ddgst:-false} 00:38:52.545 }, 00:38:52.545 "method": "bdev_nvme_attach_controller" 00:38:52.545 } 00:38:52.545 EOF 00:38:52.545 )") 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:52.545 "params": { 00:38:52.545 "name": "Nvme0", 00:38:52.545 "trtype": "tcp", 00:38:52.545 "traddr": "10.0.0.2", 00:38:52.545 "adrfam": "ipv4", 00:38:52.545 "trsvcid": "4420", 00:38:52.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:52.545 "hdgst": false, 00:38:52.545 "ddgst": false 00:38:52.545 }, 00:38:52.545 "method": "bdev_nvme_attach_controller" 00:38:52.545 }' 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:52.545 08:05:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.545 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:52.545 ... 
00:38:52.545 fio-3.35 00:38:52.545 Starting 3 threads 00:38:52.802 EAL: No free 2048 kB hugepages reported on node 1 00:38:59.382 00:38:59.382 filename0: (groupid=0, jobs=1): err= 0: pid=1266330: Mon Jul 15 08:05:49 2024 00:38:59.382 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(105MiB/5048msec) 00:38:59.382 slat (nsec): min=6380, max=72679, avg=21579.17, stdev=5000.54 00:38:59.382 clat (usec): min=6425, max=58013, avg=18085.70, stdev=12397.35 00:38:59.382 lat (usec): min=6441, max=58035, avg=18107.28, stdev=12397.40 00:38:59.382 clat percentiles (usec): 00:38:59.382 | 1.00th=[ 7308], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11731], 00:38:59.382 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14615], 60.00th=[15401], 00:38:59.382 | 70.00th=[16188], 80.00th=[17171], 90.00th=[47973], 95.00th=[54264], 00:38:59.382 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:38:59.382 | 99.99th=[57934] 00:38:59.382 bw ( KiB/s): min=16384, max=24064, per=30.75%, avg=21329.40, stdev=2721.18, samples=10 00:38:59.382 iops : min= 128, max= 188, avg=166.60, stdev=21.23, samples=10 00:38:59.382 lat (msec) : 10=6.46%, 20=82.78%, 50=1.20%, 100=9.57% 00:38:59.382 cpu : usr=93.74%, sys=5.75%, ctx=7, majf=0, minf=1639 00:38:59.382 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.382 issued rwts: total=836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:59.382 filename0: (groupid=0, jobs=1): err= 0: pid=1266331: Mon Jul 15 08:05:49 2024 00:38:59.382 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(119MiB/5009msec) 00:38:59.382 slat (nsec): min=6633, max=57987, avg=21924.62, stdev=4100.63 00:38:59.382 clat (usec): min=6261, max=58499, avg=15723.30, stdev=8473.31 00:38:59.382 lat (usec): min=6281, max=58524, avg=15745.22, stdev=8473.28 00:38:59.382 clat percentiles (usec): 00:38:59.382 | 1.00th=[ 7308], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10814], 00:38:59.382 | 30.00th=[11863], 40.00th=[13304], 50.00th=[14353], 60.00th=[15270], 00:38:59.382 | 70.00th=[16319], 80.00th=[18220], 90.00th=[20055], 95.00th=[22152], 00:38:59.382 | 99.00th=[55837], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:38:59.382 | 99.99th=[58459] 00:38:59.382 bw ( KiB/s): min=20992, max=27392, per=35.10%, avg=24345.60, stdev=1882.95, samples=10 00:38:59.382 iops : min= 164, max= 214, avg=190.20, stdev=14.71, samples=10 00:38:59.382 lat (msec) : 10=11.64%, 20=78.09%, 50=6.81%, 100=3.46% 00:38:59.382 cpu : usr=94.07%, sys=5.37%, ctx=10, majf=0, minf=1637 00:38:59.382 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.382 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:59.382 filename0: (groupid=0, jobs=1): err= 0: pid=1266332: Mon Jul 15 08:05:49 2024 00:38:59.382 read: IOPS=187, BW=23.4MiB/s (24.6MB/s)(118MiB/5049msec) 00:38:59.382 slat (nsec): min=5915, max=51210, avg=22583.53, stdev=4005.89 00:38:59.382 clat (usec): min=5767, max=59551, avg=15936.38, stdev=9630.29 00:38:59.382 lat (usec): min=5787, max=59572, avg=15958.96, stdev=9630.46 00:38:59.382 clat percentiles (usec): 
00:38:59.382 | 1.00th=[ 6390], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10552], 00:38:59.382 | 30.00th=[11863], 40.00th=[13173], 50.00th=[14091], 60.00th=[15139], 00:38:59.382 | 70.00th=[16188], 80.00th=[17433], 90.00th=[19268], 95.00th=[49546], 00:38:59.382 | 99.00th=[54789], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:38:59.382 | 99.99th=[59507] 00:38:59.382 bw ( KiB/s): min=18688, max=31488, per=34.80%, avg=24140.80, stdev=3678.35, samples=10 00:38:59.382 iops : min= 146, max= 246, avg=188.60, stdev=28.74, samples=10 00:38:59.382 lat (msec) : 10=15.64%, 20=75.58%, 50=3.91%, 100=4.86% 00:38:59.382 cpu : usr=93.32%, sys=5.39%, ctx=274, majf=0, minf=1634 00:38:59.382 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.382 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:59.382 00:38:59.382 Run status group 0 (all jobs): 00:38:59.382 READ: bw=67.7MiB/s (71.0MB/s), 20.7MiB/s-23.8MiB/s (21.7MB/s-25.0MB/s), io=342MiB (359MB), run=5009-5049msec 00:38:59.640 ----------------------------------------------------- 00:38:59.640 Suppressions used: 00:38:59.640 count bytes template 00:38:59.640 5 44 /usr/src/fio/parse.c 00:38:59.640 1 8 libtcmalloc_minimal.so 00:38:59.640 1 904 libcrypto.so 00:38:59.640 ----------------------------------------------------- 00:38:59.640 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:59.640 08:05:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.640 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 bdev_null0 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 [2024-07-15 08:05:50.735700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 bdev_null1 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 bdev_null2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.641 { 00:38:59.641 "params": { 00:38:59.641 "name": "Nvme$subsystem", 00:38:59.641 "trtype": "$TEST_TRANSPORT", 00:38:59.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.641 "adrfam": "ipv4", 00:38:59.641 "trsvcid": "$NVMF_PORT", 00:38:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.641 "hdgst": ${hdgst:-false}, 00:38:59.641 "ddgst": ${ddgst:-false} 00:38:59.641 }, 00:38:59.641 "method": "bdev_nvme_attach_controller" 00:38:59.641 } 00:38:59.641 EOF 00:38:59.641 )") 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.641 { 00:38:59.641 "params": { 00:38:59.641 "name": "Nvme$subsystem", 00:38:59.641 "trtype": "$TEST_TRANSPORT", 00:38:59.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.641 "adrfam": "ipv4", 00:38:59.641 "trsvcid": "$NVMF_PORT", 00:38:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.641 
"hdgst": ${hdgst:-false}, 00:38:59.641 "ddgst": ${ddgst:-false} 00:38:59.641 }, 00:38:59.641 "method": "bdev_nvme_attach_controller" 00:38:59.641 } 00:38:59.641 EOF 00:38:59.641 )") 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.641 { 00:38:59.641 "params": { 00:38:59.641 "name": "Nvme$subsystem", 00:38:59.641 "trtype": "$TEST_TRANSPORT", 00:38:59.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.641 "adrfam": "ipv4", 00:38:59.641 "trsvcid": "$NVMF_PORT", 00:38:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.641 "hdgst": ${hdgst:-false}, 00:38:59.641 "ddgst": ${ddgst:-false} 00:38:59.641 }, 00:38:59.641 "method": "bdev_nvme_attach_controller" 00:38:59.641 } 00:38:59.641 EOF 00:38:59.641 )") 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:59.641 08:05:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:59.641 "params": { 00:38:59.641 "name": "Nvme0", 00:38:59.641 "trtype": "tcp", 00:38:59.641 "traddr": "10.0.0.2", 00:38:59.641 "adrfam": "ipv4", 00:38:59.641 "trsvcid": "4420", 00:38:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:59.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:59.642 "hdgst": false, 00:38:59.642 "ddgst": false 00:38:59.642 }, 00:38:59.642 "method": "bdev_nvme_attach_controller" 00:38:59.642 },{ 00:38:59.642 "params": { 00:38:59.642 "name": "Nvme1", 00:38:59.642 "trtype": "tcp", 00:38:59.642 "traddr": "10.0.0.2", 00:38:59.642 "adrfam": "ipv4", 00:38:59.642 "trsvcid": "4420", 00:38:59.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.642 "hdgst": false, 00:38:59.642 "ddgst": false 00:38:59.642 }, 00:38:59.642 "method": "bdev_nvme_attach_controller" 00:38:59.642 },{ 00:38:59.642 "params": { 00:38:59.642 "name": "Nvme2", 00:38:59.642 "trtype": "tcp", 00:38:59.642 "traddr": "10.0.0.2", 00:38:59.642 "adrfam": "ipv4", 00:38:59.642 "trsvcid": "4420", 00:38:59.642 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:59.642 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:59.642 "hdgst": false, 00:38:59.642 "ddgst": false 00:38:59.642 }, 00:38:59.642 "method": "bdev_nvme_attach_controller" 00:38:59.642 }' 00:38:59.642 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:59.642 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:59.642 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:59.642 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:59.642 08:05:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.900 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.900 ... 00:38:59.900 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.900 ... 00:38:59.900 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.900 ... 00:38:59.900 fio-3.35 00:38:59.900 Starting 24 threads 00:39:00.159 EAL: No free 2048 kB hugepages reported on node 1 00:39:12.359 00:39:12.359 filename0: (groupid=0, jobs=1): err= 0: pid=1267267: Mon Jul 15 08:06:02 2024 00:39:12.359 read: IOPS=357, BW=1430KiB/s (1465kB/s)(14.1MiB/10073msec) 00:39:12.359 slat (nsec): min=10826, max=89221, avg=26326.96, stdev=17318.89 00:39:12.359 clat (msec): min=25, max=111, avg=44.40, stdev= 9.92 00:39:12.359 lat (msec): min=25, max=111, avg=44.43, stdev= 9.92 00:39:12.359 clat percentiles (msec): 00:39:12.359 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 43], 00:39:12.359 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.359 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 48], 95.00th=[ 54], 00:39:12.359 | 99.00th=[ 103], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 112], 00:39:12.359 | 99.99th=[ 112] 00:39:12.359 bw ( KiB/s): min= 1024, max= 1600, per=4.29%, avg=1432.90, stdev=126.87, samples=20 00:39:12.359 iops : min= 256, max= 400, avg=358.20, stdev=31.75, samples=20 00:39:12.359 lat (msec) : 50=93.78%, 100=5.05%, 250=1.17% 00:39:12.359 cpu : usr=98.18%, sys=1.32%, ctx=30, majf=0, minf=1635 00:39:12.359 IO depths : 1=3.7%, 2=7.5%, 4=16.7%, 8=62.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:39:12.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 complete : 0=0.0%, 4=91.9%, 8=3.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 issued rwts: total=3602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.359 filename0: (groupid=0, jobs=1): err= 0: pid=1267268: Mon Jul 15 08:06:02 2024 00:39:12.359 read: IOPS=343, BW=1372KiB/s (1405kB/s)(13.5MiB/10074msec) 00:39:12.359 slat (nsec): min=5588, max=80860, avg=31472.74, stdev=9400.79 00:39:12.359 clat (msec): min=37, max=172, avg=46.28, stdev= 9.86 00:39:12.359 lat (msec): min=37, max=172, avg=46.32, stdev= 9.86 00:39:12.359 clat percentiles (msec): 00:39:12.359 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.359 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.359 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.359 | 99.00th=[ 79], 99.50th=[ 113], 99.90th=[ 174], 99.95th=[ 174], 00:39:12.359 | 99.99th=[ 174] 00:39:12.359 bw ( KiB/s): min= 1152, max= 1536, per=4.12%, avg=1376.00, stdev=91.69, samples=20 00:39:12.359 iops : min= 288, max= 384, avg=344.00, stdev=22.92, samples=20 00:39:12.359 lat (msec) : 50=96.79%, 100=2.69%, 250=0.52% 00:39:12.359 cpu : usr=98.05%, sys=1.46%, ctx=15, majf=0, minf=1636 00:39:12.359 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.359 filename0: (groupid=0, jobs=1): err= 0: pid=1267269: Mon Jul 15 08:06:02 2024 00:39:12.359 read: IOPS=343, BW=1373KiB/s (1406kB/s)(13.5MiB/10067msec) 00:39:12.359 slat (nsec): min=13814, max=93134, avg=39627.26, stdev=11196.39 00:39:12.359 clat (msec): min=32, max=166, avg=46.24, stdev= 9.54 00:39:12.359 lat (msec): min=32, max=166, avg=46.28, stdev= 9.54 00:39:12.359 clat percentiles (msec): 00:39:12.359 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.359 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.359 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.359 | 99.00th=[ 86], 99.50th=[ 95], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.359 | 99.99th=[ 167] 00:39:12.359 bw ( KiB/s): min= 1152, max= 1536, per=4.12%, avg=1376.00, stdev=91.69, samples=20 00:39:12.359 iops : min= 288, max= 384, avg=344.00, stdev=22.92, samples=20 00:39:12.359 lat (msec) : 50=96.64%, 100=2.89%, 250=0.46% 00:39:12.359 cpu : usr=97.87%, sys=1.60%, ctx=17, majf=0, minf=1635 00:39:12.359 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.359 filename0: (groupid=0, jobs=1): err= 0: pid=1267270: Mon Jul 15 08:06:02 2024 00:39:12.359 read: IOPS=346, BW=1387KiB/s (1420kB/s)(13.6MiB/10013msec) 00:39:12.359 slat (nsec): min=9000, max=98045, avg=42732.65, stdev=13599.83 00:39:12.359 clat (msec): min=27, max=167, avg=45.76, stdev= 8.71 00:39:12.359 lat (msec): min=28, max=167, avg=45.80, stdev= 8.71 00:39:12.359 clat percentiles (msec): 00:39:12.359 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.359 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.359 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.359 | 99.00th=[ 61], 99.50th=[ 73], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.359 | 99.99th=[ 167] 00:39:12.359 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1387.79, stdev=75.77, samples=19 00:39:12.359 iops : min= 288, max= 384, avg=346.95, stdev=18.94, samples=19 00:39:12.359 lat (msec) : 50=97.06%, 100=2.48%, 250=0.46% 00:39:12.359 cpu : usr=92.35%, sys=3.98%, ctx=385, majf=0, minf=1634 00:39:12.359 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:12.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.359 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.359 filename0: (groupid=0, jobs=1): err= 0: pid=1267271: Mon Jul 15 08:06:02 2024 00:39:12.359 read: IOPS=378, BW=1514KiB/s (1550kB/s)(14.9MiB/10061msec) 00:39:12.359 slat (nsec): min=10919, max=68807, avg=19258.14, stdev=8789.32 00:39:12.359 clat (msec): min=15, max=172, avg=42.09, stdev=11.99 00:39:12.359 lat (msec): min=15, max=172, avg=42.11, stdev=11.99 00:39:12.359 clat percentiles (msec): 00:39:12.359 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 34], 00:39:12.359 | 
30.00th=[ 38], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:39:12.359 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 57], 00:39:12.359 | 99.00th=[ 84], 99.50th=[ 104], 99.90th=[ 174], 99.95th=[ 174], 00:39:12.359 | 99.99th=[ 174] 00:39:12.359 bw ( KiB/s): min= 1072, max= 1856, per=4.53%, avg=1515.45, stdev=169.99, samples=20 00:39:12.359 iops : min= 268, max= 464, avg=378.80, stdev=42.55, samples=20 00:39:12.359 lat (msec) : 20=0.53%, 50=90.99%, 100=7.90%, 250=0.58% 00:39:12.359 cpu : usr=98.15%, sys=1.37%, ctx=11, majf=0, minf=1636 00:39:12.359 IO depths : 1=1.6%, 2=3.4%, 4=10.0%, 8=72.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:39:12.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=90.2%, 8=5.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename0: (groupid=0, jobs=1): err= 0: pid=1267272: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=345, BW=1381KiB/s (1414kB/s)(13.5MiB/10012msec) 00:39:12.360 slat (nsec): min=12506, max=91010, avg=39663.19, stdev=11530.24 00:39:12.360 clat (msec): min=32, max=167, avg=46.01, stdev= 8.86 00:39:12.360 lat (msec): min=32, max=167, avg=46.05, stdev= 8.86 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 65], 99.50th=[ 80], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.360 | 99.99th=[ 167] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1536, per=4.13%, avg=1381.05, stdev=80.72, samples=19 00:39:12.360 iops : min= 288, max= 384, avg=345.26, stdev=20.18, samples=19 00:39:12.360 lat (msec) : 50=97.11%, 100=2.43%, 250=0.46% 00:39:12.360 cpu : usr=98.18%, sys=1.32%, ctx=14, majf=0, minf=1633 00:39:12.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename0: (groupid=0, jobs=1): err= 0: pid=1267273: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=346, BW=1387KiB/s (1421kB/s)(13.6MiB/10011msec) 00:39:12.360 slat (nsec): min=11777, max=79821, avg=34816.43, stdev=11946.68 00:39:12.360 clat (msec): min=25, max=102, avg=45.85, stdev= 5.86 00:39:12.360 lat (msec): min=26, max=102, avg=45.88, stdev= 5.86 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 65], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 103], 00:39:12.360 | 99.99th=[ 103] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1387.89, stdev=77.10, samples=19 00:39:12.360 iops : min= 288, max= 384, avg=346.95, stdev=19.27, samples=19 00:39:12.360 lat (msec) : 50=96.66%, 100=2.88%, 250=0.46% 00:39:12.360 cpu : usr=97.98%, sys=1.52%, ctx=21, majf=0, minf=1637 00:39:12.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename0: (groupid=0, jobs=1): err= 0: pid=1267274: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=342, BW=1371KiB/s (1404kB/s)(13.5MiB/10080msec) 00:39:12.360 slat (nsec): min=11472, max=89908, avg=40442.57, stdev=15173.53 00:39:12.360 clat (msec): min=37, max=177, avg=46.30, stdev=10.23 00:39:12.360 lat (msec): min=37, max=177, avg=46.34, stdev=10.23 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 171], 99.95th=[ 178], 00:39:12.360 | 99.99th=[ 178] 00:39:12.360 bw ( KiB/s): min= 1024, max= 1536, per=4.11%, avg=1373.25, stdev=116.39, samples=20 00:39:12.360 iops : min= 256, max= 384, avg=343.30, stdev=29.10, samples=20 00:39:12.360 lat (msec) : 50=96.82%, 100=2.72%, 250=0.46% 00:39:12.360 cpu : usr=98.10%, sys=1.39%, ctx=19, majf=0, minf=1636 00:39:12.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267275: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=343, BW=1373KiB/s (1406kB/s)(13.5MiB/10065msec) 00:39:12.360 slat (usec): min=13, max=106, avg=39.86, stdev=11.18 00:39:12.360 clat (msec): min=30, max=166, avg=46.23, stdev= 9.56 00:39:12.360 lat (msec): min=30, max=166, avg=46.27, stdev= 9.56 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 84], 99.50th=[ 95], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.360 | 99.99th=[ 167] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1536, per=4.12%, avg=1376.10, stdev=91.58, samples=20 00:39:12.360 iops : min= 288, max= 384, avg=344.00, stdev=22.92, samples=20 00:39:12.360 lat (msec) : 50=96.53%, 100=3.01%, 250=0.46% 00:39:12.360 cpu : usr=98.04%, sys=1.47%, ctx=14, majf=0, minf=1634 00:39:12.360 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267276: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=357, BW=1431KiB/s (1465kB/s)(14.0MiB/10036msec) 00:39:12.360 slat (nsec): min=8630, max=85936, avg=28887.30, stdev=12124.07 00:39:12.360 clat (msec): min=2, max=111, avg=44.49, stdev= 8.63 00:39:12.360 lat (msec): min=2, max=111, avg=44.52, stdev= 8.63 00:39:12.360 clat 
percentiles (msec): 00:39:12.360 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 65], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 112], 00:39:12.360 | 99.99th=[ 112] 00:39:12.360 bw ( KiB/s): min= 1152, max= 2224, per=4.28%, avg=1429.45, stdev=201.49, samples=20 00:39:12.360 iops : min= 288, max= 556, avg=357.35, stdev=50.37, samples=20 00:39:12.360 lat (msec) : 4=0.45%, 10=1.53%, 20=0.25%, 50=94.87%, 100=2.40% 00:39:12.360 lat (msec) : 250=0.50% 00:39:12.360 cpu : usr=93.53%, sys=3.68%, ctx=203, majf=0, minf=1637 00:39:12.360 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267277: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=349, BW=1398KiB/s (1431kB/s)(13.7MiB/10029msec) 00:39:12.360 slat (usec): min=8, max=125, avg=61.67, stdev=16.71 00:39:12.360 clat (msec): min=10, max=167, avg=45.23, stdev= 9.17 00:39:12.360 lat (msec): min=10, max=167, avg=45.29, stdev= 9.17 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 29], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 59], 99.50th=[ 77], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.360 | 99.99th=[ 167] 00:39:12.360 bw ( KiB/s): min= 1024, max= 1536, per=4.18%, avg=1395.20, stdev=109.09, samples=20 00:39:12.360 iops : min= 256, max= 384, avg=348.80, stdev=27.27, samples=20 00:39:12.360 lat (msec) : 20=0.91%, 50=96.63%, 100=2.00%, 250=0.46% 00:39:12.360 cpu : usr=98.05%, sys=1.40%, ctx=18, majf=0, minf=1635 00:39:12.360 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267278: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=345, BW=1382KiB/s (1415kB/s)(13.6MiB/10098msec) 00:39:12.360 slat (nsec): min=8350, max=93233, avg=25125.14, stdev=10273.16 00:39:12.360 clat (msec): min=28, max=172, avg=46.10, stdev= 9.62 00:39:12.360 lat (msec): min=28, max=172, avg=46.12, stdev= 9.62 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 63], 99.50th=[ 100], 99.90th=[ 174], 99.95th=[ 174], 00:39:12.360 | 99.99th=[ 174] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1388.50, stdev=84.70, samples=20 00:39:12.360 iops : min= 288, max= 384, avg=347.10, stdev=21.17, samples=20 00:39:12.360 lat (msec) : 50=96.96%, 100=2.58%, 250=0.46% 00:39:12.360 cpu : usr=97.23%, 
sys=1.84%, ctx=168, majf=0, minf=1637 00:39:12.360 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267279: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=344, BW=1378KiB/s (1411kB/s)(13.5MiB/10031msec) 00:39:12.360 slat (nsec): min=11390, max=95100, avg=29342.04, stdev=20588.61 00:39:12.360 clat (msec): min=23, max=111, avg=46.17, stdev= 6.95 00:39:12.360 lat (msec): min=23, max=111, avg=46.20, stdev= 6.95 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 50], 00:39:12.360 | 99.00th=[ 78], 99.50th=[ 103], 99.90th=[ 111], 99.95th=[ 111], 00:39:12.360 | 99.99th=[ 111] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1520, per=4.13%, avg=1380.89, stdev=79.69, samples=19 00:39:12.360 iops : min= 288, max= 380, avg=345.21, stdev=19.94, samples=19 00:39:12.360 lat (msec) : 50=96.24%, 100=2.84%, 250=0.93% 00:39:12.360 cpu : usr=98.08%, sys=1.43%, ctx=12, majf=0, minf=1636 00:39:12.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267280: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=362, BW=1449KiB/s (1484kB/s)(14.2MiB/10067msec) 00:39:12.360 slat (nsec): min=11181, max=97963, avg=29930.90, stdev=17698.81 00:39:12.360 clat (msec): min=14, max=172, avg=43.93, stdev=11.42 00:39:12.360 lat (msec): min=14, max=172, avg=43.96, stdev=11.42 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 25], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 41], 00:39:12.360 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 48], 95.00th=[ 55], 00:39:12.360 | 99.00th=[ 75], 99.50th=[ 91], 99.90th=[ 174], 99.95th=[ 174], 00:39:12.360 | 99.99th=[ 174] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1760, per=4.34%, avg=1451.50, stdev=146.95, samples=20 00:39:12.360 iops : min= 288, max= 440, avg=362.85, stdev=36.77, samples=20 00:39:12.360 lat (msec) : 20=0.44%, 50=92.46%, 100=6.61%, 250=0.49% 00:39:12.360 cpu : usr=98.14%, sys=1.35%, ctx=13, majf=0, minf=1636 00:39:12.360 IO depths : 1=1.8%, 2=6.0%, 4=18.0%, 8=62.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=92.5%, 8=2.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267281: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=342, BW=1372KiB/s (1404kB/s)(13.5MiB/10079msec) 00:39:12.360 slat (usec): min=5, max=103, avg=38.43, 
stdev=12.47 00:39:12.360 clat (msec): min=33, max=172, avg=46.30, stdev= 9.83 00:39:12.360 lat (msec): min=33, max=172, avg=46.34, stdev= 9.83 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 84], 99.50th=[ 94], 99.90th=[ 174], 99.95th=[ 174], 00:39:12.360 | 99.99th=[ 174] 00:39:12.360 bw ( KiB/s): min= 1024, max= 1536, per=4.12%, avg=1375.30, stdev=100.47, samples=20 00:39:12.360 iops : min= 256, max= 384, avg=343.80, stdev=25.11, samples=20 00:39:12.360 lat (msec) : 50=95.72%, 100=3.82%, 250=0.46% 00:39:12.360 cpu : usr=95.40%, sys=2.61%, ctx=82, majf=0, minf=1636 00:39:12.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename1: (groupid=0, jobs=1): err= 0: pid=1267282: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=345, BW=1383KiB/s (1416kB/s)(13.6MiB/10043msec) 00:39:12.360 slat (usec): min=11, max=224, avg=35.38, stdev=11.02 00:39:12.360 clat (msec): min=30, max=102, avg=45.87, stdev= 6.08 00:39:12.360 lat (msec): min=30, max=102, avg=45.90, stdev= 6.08 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 49], 00:39:12.360 | 99.00th=[ 74], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 103], 00:39:12.360 | 99.99th=[ 103] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1386.70, stdev=75.42, samples=20 00:39:12.360 iops : min= 288, max= 384, avg=346.65, stdev=18.85, samples=20 00:39:12.360 lat (msec) : 50=95.85%, 100=3.69%, 250=0.46% 00:39:12.360 cpu : usr=95.76%, sys=2.64%, ctx=38, majf=0, minf=1637 00:39:12.360 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename2: (groupid=0, jobs=1): err= 0: pid=1267283: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=345, BW=1382KiB/s (1415kB/s)(13.5MiB/10003msec) 00:39:12.360 slat (usec): min=13, max=102, avg=40.48, stdev=10.52 00:39:12.360 clat (msec): min=33, max=167, avg=45.95, stdev= 8.72 00:39:12.360 lat (msec): min=33, max=167, avg=45.99, stdev= 8.72 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.360 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.360 | 99.00th=[ 65], 99.50th=[ 70], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.360 | 99.99th=[ 167] 00:39:12.360 bw ( KiB/s): min= 1152, max= 1536, per=4.13%, avg=1381.05, stdev=80.72, samples=19 00:39:12.360 iops : min= 288, max= 384, avg=345.26, stdev=20.18, samples=19 
00:39:12.360 lat (msec) : 50=97.11%, 100=2.43%, 250=0.46% 00:39:12.360 cpu : usr=95.73%, sys=2.54%, ctx=104, majf=0, minf=1634 00:39:12.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.360 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.360 filename2: (groupid=0, jobs=1): err= 0: pid=1267284: Mon Jul 15 08:06:02 2024 00:39:12.360 read: IOPS=358, BW=1435KiB/s (1470kB/s)(14.1MiB/10066msec) 00:39:12.360 slat (usec): min=15, max=369, avg=58.50, stdev=13.45 00:39:12.360 clat (msec): min=14, max=166, avg=44.23, stdev=10.47 00:39:12.360 lat (msec): min=14, max=166, avg=44.29, stdev=10.47 00:39:12.360 clat percentiles (msec): 00:39:12.360 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 43], 00:39:12.360 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.360 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 48], 95.00th=[ 58], 00:39:12.360 | 99.00th=[ 90], 99.50th=[ 107], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.360 | 99.99th=[ 167] 00:39:12.360 bw ( KiB/s): min= 1128, max= 1792, per=4.30%, avg=1437.20, stdev=147.94, samples=20 00:39:12.360 iops : min= 282, max= 448, avg=359.25, stdev=37.06, samples=20 00:39:12.360 lat (msec) : 20=0.33%, 50=91.64%, 100=7.50%, 250=0.53% 00:39:12.360 cpu : usr=97.99%, sys=1.46%, ctx=13, majf=0, minf=1634 00:39:12.361 IO depths : 1=2.1%, 2=4.9%, 4=12.4%, 8=68.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=91.3%, 8=4.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 filename2: (groupid=0, jobs=1): err= 0: pid=1267286: Mon Jul 15 08:06:02 2024 00:39:12.361 read: IOPS=349, BW=1400KiB/s (1433kB/s)(13.7MiB/10015msec) 00:39:12.361 slat (nsec): min=8618, max=84124, avg=25659.87, stdev=12351.02 00:39:12.361 clat (msec): min=9, max=160, avg=45.50, stdev= 8.65 00:39:12.361 lat (msec): min=10, max=160, avg=45.52, stdev= 8.65 00:39:12.361 clat percentiles (msec): 00:39:12.361 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.361 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.361 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.361 | 99.00th=[ 62], 99.50th=[ 63], 99.90th=[ 161], 99.95th=[ 161], 00:39:12.361 | 99.99th=[ 161] 00:39:12.361 bw ( KiB/s): min= 1024, max= 1536, per=4.18%, avg=1395.30, stdev=109.10, samples=20 00:39:12.361 iops : min= 256, max= 384, avg=348.80, stdev=27.27, samples=20 00:39:12.361 lat (msec) : 10=0.03%, 20=0.43%, 50=97.20%, 100=1.88%, 250=0.46% 00:39:12.361 cpu : usr=97.93%, sys=1.40%, ctx=57, majf=0, minf=1637 00:39:12.361 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 filename2: (groupid=0, jobs=1): err= 0: pid=1267287: Mon Jul 15 08:06:02 2024 00:39:12.361 read: IOPS=342, 
BW=1372KiB/s (1405kB/s)(13.5MiB/10077msec) 00:39:12.361 slat (usec): min=4, max=102, avg=44.12, stdev=15.23 00:39:12.361 clat (msec): min=31, max=166, avg=46.23, stdev= 9.76 00:39:12.361 lat (msec): min=31, max=166, avg=46.28, stdev= 9.76 00:39:12.361 clat percentiles (msec): 00:39:12.361 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.361 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.361 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.361 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.361 | 99.99th=[ 167] 00:39:12.361 bw ( KiB/s): min= 1024, max= 1536, per=4.12%, avg=1375.55, stdev=116.43, samples=20 00:39:12.361 iops : min= 256, max= 384, avg=343.85, stdev=29.10, samples=20 00:39:12.361 lat (msec) : 50=96.70%, 100=2.84%, 250=0.46% 00:39:12.361 cpu : usr=96.10%, sys=2.22%, ctx=96, majf=0, minf=1634 00:39:12.361 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 filename2: (groupid=0, jobs=1): err= 0: pid=1267288: Mon Jul 15 08:06:02 2024 00:39:12.361 read: IOPS=345, BW=1382KiB/s (1415kB/s)(13.5MiB/10003msec) 00:39:12.361 slat (nsec): min=11367, max=92076, avg=39243.35, stdev=12286.63 00:39:12.361 clat (msec): min=30, max=167, avg=45.96, stdev= 8.73 00:39:12.361 lat (msec): min=30, max=167, avg=46.00, stdev= 8.73 00:39:12.361 clat percentiles (msec): 00:39:12.361 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.361 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.361 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.361 | 99.00th=[ 65], 99.50th=[ 80], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.361 | 99.99th=[ 167] 00:39:12.361 bw ( KiB/s): min= 1152, max= 1536, per=4.13%, avg=1381.05, stdev=80.72, samples=19 00:39:12.361 iops : min= 288, max= 384, avg=345.26, stdev=20.18, samples=19 00:39:12.361 lat (msec) : 50=97.11%, 100=2.43%, 250=0.46% 00:39:12.361 cpu : usr=97.10%, sys=1.89%, ctx=198, majf=0, minf=1636 00:39:12.361 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 filename2: (groupid=0, jobs=1): err= 0: pid=1267290: Mon Jul 15 08:06:02 2024 00:39:12.361 read: IOPS=364, BW=1459KiB/s (1494kB/s)(14.3MiB/10065msec) 00:39:12.361 slat (usec): min=8, max=254, avg=28.15, stdev=13.67 00:39:12.361 clat (msec): min=14, max=166, avg=43.57, stdev=11.53 00:39:12.361 lat (msec): min=14, max=166, avg=43.60, stdev=11.53 00:39:12.361 clat percentiles (msec): 00:39:12.361 | 1.00th=[ 25], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 40], 00:39:12.361 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:39:12.361 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 51], 00:39:12.361 | 99.00th=[ 84], 99.50th=[ 114], 99.90th=[ 167], 99.95th=[ 167], 00:39:12.361 | 99.99th=[ 167] 00:39:12.361 bw ( KiB/s): min= 1136, max= 1792, per=4.38%, avg=1464.10, 
stdev=150.37, samples=20 00:39:12.361 iops : min= 284, max= 448, avg=366.00, stdev=37.62, samples=20 00:39:12.361 lat (msec) : 20=0.46%, 50=94.47%, 100=4.36%, 250=0.71% 00:39:12.361 cpu : usr=95.25%, sys=2.64%, ctx=109, majf=0, minf=1636 00:39:12.361 IO depths : 1=2.6%, 2=5.6%, 4=13.5%, 8=66.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=91.4%, 8=4.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 filename2: (groupid=0, jobs=1): err= 0: pid=1267291: Mon Jul 15 08:06:02 2024 00:39:12.361 read: IOPS=343, BW=1372KiB/s (1405kB/s)(13.5MiB/10074msec) 00:39:12.361 slat (nsec): min=11630, max=92250, avg=37586.70, stdev=13740.07 00:39:12.361 clat (msec): min=37, max=172, avg=46.24, stdev= 9.89 00:39:12.361 lat (msec): min=37, max=172, avg=46.27, stdev= 9.89 00:39:12.361 clat percentiles (msec): 00:39:12.361 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.361 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.361 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.361 | 99.00th=[ 79], 99.50th=[ 113], 99.90th=[ 174], 99.95th=[ 174], 00:39:12.361 | 99.99th=[ 174] 00:39:12.361 bw ( KiB/s): min= 1152, max= 1536, per=4.12%, avg=1376.00, stdev=91.69, samples=20 00:39:12.361 iops : min= 288, max= 384, avg=344.00, stdev=22.92, samples=20 00:39:12.361 lat (msec) : 50=96.82%, 100=2.66%, 250=0.52% 00:39:12.361 cpu : usr=98.33%, sys=1.17%, ctx=17, majf=0, minf=1634 00:39:12.361 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 filename2: (groupid=0, jobs=1): err= 0: pid=1267292: Mon Jul 15 08:06:02 2024 00:39:12.361 read: IOPS=345, BW=1382KiB/s (1415kB/s)(13.6MiB/10096msec) 00:39:12.361 slat (usec): min=6, max=100, avg=36.06, stdev=14.94 00:39:12.361 clat (msec): min=27, max=160, avg=45.99, stdev= 8.97 00:39:12.361 lat (msec): min=27, max=160, avg=46.02, stdev= 8.97 00:39:12.361 clat percentiles (msec): 00:39:12.361 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:39:12.361 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:39:12.361 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 47], 95.00th=[ 48], 00:39:12.361 | 99.00th=[ 63], 99.50th=[ 100], 99.90th=[ 161], 99.95th=[ 161], 00:39:12.361 | 99.99th=[ 161] 00:39:12.361 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1388.80, stdev=85.87, samples=20 00:39:12.361 iops : min= 288, max= 384, avg=347.20, stdev=21.47, samples=20 00:39:12.361 lat (msec) : 50=96.59%, 100=2.95%, 250=0.46% 00:39:12.361 cpu : usr=92.66%, sys=3.94%, ctx=368, majf=0, minf=1636 00:39:12.361 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:12.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.361 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:12.361 00:39:12.361 Run status 
group 0 (all jobs): 00:39:12.361 READ: bw=32.6MiB/s (34.2MB/s), 1371KiB/s-1514KiB/s (1404kB/s-1550kB/s), io=329MiB (345MB), run=10003-10098msec 00:39:12.925 ----------------------------------------------------- 00:39:12.925 Suppressions used: 00:39:12.925 count bytes template 00:39:12.925 45 402 /usr/src/fio/parse.c 00:39:12.925 1 8 libtcmalloc_minimal.so 00:39:12.925 1 904 libcrypto.so 00:39:12.925 ----------------------------------------------------- 00:39:12.925 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 bdev_null0 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 [2024-07-15 08:06:04.053220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 bdev_null1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:12.925 { 00:39:12.925 "params": { 00:39:12.925 "name": "Nvme$subsystem", 00:39:12.925 "trtype": "$TEST_TRANSPORT", 00:39:12.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.925 "adrfam": "ipv4", 00:39:12.925 "trsvcid": "$NVMF_PORT", 00:39:12.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.925 "hdgst": ${hdgst:-false}, 00:39:12.925 "ddgst": ${ddgst:-false} 00:39:12.925 }, 00:39:12.925 "method": "bdev_nvme_attach_controller" 00:39:12.925 } 00:39:12.925 EOF 00:39:12.925 )") 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:12.925 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:12.926 { 00:39:12.926 "params": { 00:39:12.926 "name": "Nvme$subsystem", 00:39:12.926 "trtype": "$TEST_TRANSPORT", 00:39:12.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.926 "adrfam": "ipv4", 00:39:12.926 "trsvcid": "$NVMF_PORT", 00:39:12.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.926 "hdgst": ${hdgst:-false}, 00:39:12.926 "ddgst": ${ddgst:-false} 00:39:12.926 }, 00:39:12.926 "method": "bdev_nvme_attach_controller" 00:39:12.926 } 00:39:12.926 EOF 00:39:12.926 )") 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
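The config assembly traced here follows a simple shell pattern: one bdev_nvme_attach_controller fragment per subsystem is captured into an array through a heredoc, the fragments are comma-joined via IFS, and jq validates the result before fio reads it (the joined fragments appear in the next trace entry). A shape-only sketch in plain bash — not the verbatim nvmf/common.sh source; the outer "subsystems"/"bdev" wrapper is an assumption based on the SPDK JSON-config layout that --spdk_json_conf expects:

    config=()
    for subsystem in 0 1; do
    	# One attach-controller fragment per subsystem, mirroring the trace above.
    	config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    	)")
    done
    # Comma-join the fragments, wrap them in a bdev-subsystem config, and let jq
    # validate/pretty-print the document that fio later reads from /dev/fd/62.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON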
00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:12.926 "params": { 00:39:12.926 "name": "Nvme0", 00:39:12.926 "trtype": "tcp", 00:39:12.926 "traddr": "10.0.0.2", 00:39:12.926 "adrfam": "ipv4", 00:39:12.926 "trsvcid": "4420", 00:39:12.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.926 "hdgst": false, 00:39:12.926 "ddgst": false 00:39:12.926 }, 00:39:12.926 "method": "bdev_nvme_attach_controller" 00:39:12.926 },{ 00:39:12.926 "params": { 00:39:12.926 "name": "Nvme1", 00:39:12.926 "trtype": "tcp", 00:39:12.926 "traddr": "10.0.0.2", 00:39:12.926 "adrfam": "ipv4", 00:39:12.926 "trsvcid": "4420", 00:39:12.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:12.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:12.926 "hdgst": false, 00:39:12.926 "ddgst": false 00:39:12.926 }, 00:39:12.926 "method": "bdev_nvme_attach_controller" 00:39:12.926 }' 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:12.926 08:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.184 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:13.184 ... 00:39:13.184 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:13.184 ... 
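fio receives the job description on /dev/fd/61, so the job file itself is never echoed in the log. A hypothetical reconstruction consistent with the dif.sh@115 parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the banner below; the filename= values and the output file name are assumptions, since gen_fio_conf's exact output is not shown:

    cat > rand_params.fio <<'EOF'
    [global]
    ioengine=spdk_bdev   ; served by the LD_PRELOAD'ed SPDK fio plugin
    thread=1             ; the spdk_bdev engine requires thread mode
    rw=randread
    bs=8k,16k,128k       ; read,write,trim sizes: (R) 8KiB, (W) 16KiB, (T) 128KiB
    iodepth=8
    numjobs=2            ; 2 jobs x 2 files = the "Starting 4 threads" fio reports
    runtime=5

    [filename0]
    filename=Nvme0n1     ; hypothetical bdev names; the bdevs come from the
                         ; attach-controller JSON passed on /dev/fd/62
    [filename1]
    filename=Nvme1n1
    EOF
    # Mirroring the invocation in the trace:
    # LD_PRELOAD=.../spdk/build/fio/spdk_bdev fio --ioengine=spdk_bdev \
    #     --spdk_json_conf /dev/fd/62 rand_params.fio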
00:39:13.184 fio-3.35 00:39:13.184 Starting 4 threads 00:39:13.441 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.033 00:39:20.033 filename0: (groupid=0, jobs=1): err= 0: pid=1268877: Mon Jul 15 08:06:10 2024 00:39:20.033 read: IOPS=1404, BW=11.0MiB/s (11.5MB/s)(54.9MiB/5003msec) 00:39:20.033 slat (nsec): min=7242, max=57965, avg=17520.58, stdev=5872.36 00:39:20.033 clat (usec): min=1053, max=10609, avg=5636.71, stdev=888.51 00:39:20.033 lat (usec): min=1072, max=10627, avg=5654.23, stdev=888.26 00:39:20.033 clat percentiles (usec): 00:39:20.033 | 1.00th=[ 3228], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 5276], 00:39:20.033 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5669], 00:39:20.033 | 70.00th=[ 5735], 80.00th=[ 5800], 90.00th=[ 6194], 95.00th=[ 7439], 00:39:20.033 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[10028], 99.95th=[10159], 00:39:20.033 | 99.99th=[10552] 00:39:20.033 bw ( KiB/s): min=10896, max=11616, per=24.70%, avg=11229.40, stdev=210.07, samples=10 00:39:20.033 iops : min= 1362, max= 1452, avg=1403.60, stdev=26.29, samples=10 00:39:20.033 lat (msec) : 2=0.28%, 4=2.22%, 10=97.41%, 20=0.09% 00:39:20.033 cpu : usr=92.00%, sys=7.22%, ctx=11, majf=0, minf=1636 00:39:20.033 IO depths : 1=0.2%, 2=11.7%, 4=60.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 issued rwts: total=7025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.033 filename0: (groupid=0, jobs=1): err= 0: pid=1268878: Mon Jul 15 08:06:10 2024 00:39:20.033 read: IOPS=1422, BW=11.1MiB/s (11.7MB/s)(55.6MiB/5003msec) 00:39:20.033 slat (nsec): min=6917, max=58261, avg=17531.58, stdev=5609.60 00:39:20.033 clat (usec): min=958, max=11123, avg=5559.70, stdev=855.98 00:39:20.033 lat (usec): min=976, max=11145, avg=5577.23, stdev=855.72 00:39:20.033 clat percentiles (usec): 00:39:20.033 | 1.00th=[ 3294], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5211], 00:39:20.033 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:39:20.033 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5997], 95.00th=[ 6980], 00:39:20.033 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10552], 99.95th=[10683], 00:39:20.033 | 99.99th=[11076] 00:39:20.033 bw ( KiB/s): min=10896, max=11952, per=25.03%, avg=11380.80, stdev=308.05, samples=10 00:39:20.033 iops : min= 1362, max= 1494, avg=1422.60, stdev=38.51, samples=10 00:39:20.033 lat (usec) : 1000=0.01% 00:39:20.033 lat (msec) : 2=0.29%, 4=2.28%, 10=97.11%, 20=0.31% 00:39:20.033 cpu : usr=92.64%, sys=6.58%, ctx=8, majf=0, minf=1636 00:39:20.033 IO depths : 1=0.1%, 2=15.0%, 4=57.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 issued rwts: total=7119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.033 filename1: (groupid=0, jobs=1): err= 0: pid=1268879: Mon Jul 15 08:06:10 2024 00:39:20.033 read: IOPS=1428, BW=11.2MiB/s (11.7MB/s)(55.9MiB/5004msec) 00:39:20.033 slat (nsec): min=7288, max=78132, avg=17531.93, stdev=6317.88 00:39:20.033 clat (usec): min=1052, max=13563, avg=5540.87, stdev=806.21 00:39:20.033 lat (usec): min=1071, max=13613, avg=5558.40, stdev=806.08 00:39:20.033 clat 
percentiles (usec): 00:39:20.033 | 1.00th=[ 3523], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5211], 00:39:20.033 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:39:20.033 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 6652], 00:39:20.033 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[13304], 99.95th=[13566], 00:39:20.033 | 99.99th=[13566] 00:39:20.033 bw ( KiB/s): min=11008, max=12032, per=25.15%, avg=11433.60, stdev=329.30, samples=10 00:39:20.033 iops : min= 1376, max= 1504, avg=1429.20, stdev=41.16, samples=10 00:39:20.033 lat (msec) : 2=0.13%, 4=2.70%, 10=96.95%, 20=0.22% 00:39:20.033 cpu : usr=92.50%, sys=6.84%, ctx=17, majf=0, minf=1637 00:39:20.033 IO depths : 1=0.1%, 2=11.4%, 4=60.3%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 issued rwts: total=7149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.033 filename1: (groupid=0, jobs=1): err= 0: pid=1268881: Mon Jul 15 08:06:10 2024 00:39:20.033 read: IOPS=1427, BW=11.2MiB/s (11.7MB/s)(55.8MiB/5002msec) 00:39:20.033 slat (nsec): min=7587, max=59457, avg=18130.81, stdev=6005.95 00:39:20.033 clat (usec): min=1031, max=12939, avg=5536.38, stdev=876.45 00:39:20.033 lat (usec): min=1051, max=12962, avg=5554.51, stdev=876.37 00:39:20.033 clat percentiles (usec): 00:39:20.033 | 1.00th=[ 2704], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5211], 00:39:20.033 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:39:20.033 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 6063], 95.00th=[ 6915], 00:39:20.033 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[10028], 99.95th=[10159], 00:39:20.033 | 99.99th=[12911] 00:39:20.033 bw ( KiB/s): min=11008, max=12080, per=25.07%, avg=11397.33, stdev=331.98, samples=9 00:39:20.033 iops : min= 1376, max= 1510, avg=1424.67, stdev=41.50, samples=9 00:39:20.033 lat (msec) : 2=0.56%, 4=2.97%, 10=96.37%, 20=0.10% 00:39:20.033 cpu : usr=91.88%, sys=7.24%, ctx=8, majf=0, minf=1637 00:39:20.033 IO depths : 1=0.1%, 2=16.6%, 4=56.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.033 issued rwts: total=7142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:20.033 00:39:20.033 Run status group 0 (all jobs): 00:39:20.033 READ: bw=44.4MiB/s (46.6MB/s), 11.0MiB/s-11.2MiB/s (11.5MB/s-11.7MB/s), io=222MiB (233MB), run=5002-5004msec 00:39:20.645 ----------------------------------------------------- 00:39:20.645 Suppressions used: 00:39:20.645 count bytes template 00:39:20.645 6 52 /usr/src/fio/parse.c 00:39:20.645 1 8 libtcmalloc_minimal.so 00:39:20.645 1 904 libcrypto.so 00:39:20.645 ----------------------------------------------------- 00:39:20.645 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 
00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 00:39:20.645 real 0m28.345s 00:39:20.645 user 4m34.912s 00:39:20.645 sys 0m8.566s 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 ************************************ 00:39:20.645 END TEST fio_dif_rand_params 00:39:20.645 ************************************ 00:39:20.645 08:06:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:20.645 08:06:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:20.645 08:06:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:20.645 08:06:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 ************************************ 00:39:20.645 START TEST fio_dif_digest 00:39:20.645 ************************************ 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:20.645 08:06:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 bdev_null0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:20.645 [2024-07-15 08:06:11.830516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:20.645 { 00:39:20.645 "params": { 00:39:20.645 "name": "Nvme$subsystem", 00:39:20.645 "trtype": "$TEST_TRANSPORT", 00:39:20.645 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:39:20.645 "adrfam": "ipv4", 00:39:20.645 "trsvcid": "$NVMF_PORT", 00:39:20.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.645 "hdgst": ${hdgst:-false}, 00:39:20.645 "ddgst": ${ddgst:-false} 00:39:20.645 }, 00:39:20.645 "method": "bdev_nvme_attach_controller" 00:39:20.645 } 00:39:20.645 EOF 00:39:20.645 )") 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.645 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:20.646 "params": { 00:39:20.646 "name": "Nvme0", 00:39:20.646 "trtype": "tcp", 00:39:20.646 "traddr": "10.0.0.2", 00:39:20.646 "adrfam": "ipv4", 00:39:20.646 "trsvcid": "4420", 00:39:20.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:20.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:20.646 "hdgst": true, 00:39:20.646 "ddgst": true 00:39:20.646 }, 00:39:20.646 "method": "bdev_nvme_attach_controller" 00:39:20.646 }' 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:20.646 08:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:20.902 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:20.902 ... 00:39:20.902 fio-3.35 00:39:20.902 Starting 3 threads 00:39:21.159 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.349 00:39:33.349 filename0: (groupid=0, jobs=1): err= 0: pid=1270338: Mon Jul 15 08:06:23 2024 00:39:33.349 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10045msec) 00:39:33.349 slat (nsec): min=10532, max=59124, avg=22803.01, stdev=5566.40 00:39:33.349 clat (usec): min=11064, max=61323, avg=18097.94, stdev=3059.77 00:39:33.349 lat (usec): min=11085, max=61345, avg=18120.75, stdev=3059.89 00:39:33.349 clat percentiles (usec): 00:39:33.349 | 1.00th=[14222], 5.00th=[15926], 10.00th=[16450], 20.00th=[16909], 00:39:33.349 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:39:33.350 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:39:33.350 | 99.00th=[21890], 99.50th=[22414], 99.90th=[61080], 99.95th=[61080], 00:39:33.350 | 99.99th=[61080] 00:39:33.350 bw ( KiB/s): min=19200, max=22784, per=31.61%, avg=21220.30, stdev=831.51, samples=20 00:39:33.350 iops : min= 150, max= 178, avg=165.75, stdev= 6.52, samples=20 00:39:33.350 lat (msec) : 20=94.04%, 50=5.54%, 100=0.42% 00:39:33.350 cpu : usr=94.36%, sys=5.09%, ctx=38, majf=0, minf=1638 00:39:33.350 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.350 issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.350 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.350 filename0: (groupid=0, jobs=1): err= 0: pid=1270339: Mon Jul 15 08:06:23 2024 00:39:33.350 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(229MiB/10007msec) 00:39:33.350 slat (nsec): min=7148, max=57014, avg=26486.30, stdev=5105.96 00:39:33.350 clat (usec): min=10032, max=59224, avg=16336.16, stdev=2688.63 00:39:33.350 lat (usec): min=10053, max=59247, avg=16362.65, stdev=2688.77 00:39:33.350 clat percentiles (usec): 00:39:33.350 | 1.00th=[12125], 5.00th=[14222], 10.00th=[14746], 
20.00th=[15270], 00:39:33.350 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16319], 60.00th=[16581], 00:39:33.350 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:39:33.350 | 99.00th=[19268], 99.50th=[19530], 99.90th=[58459], 99.95th=[58983], 00:39:33.350 | 99.99th=[58983] 00:39:33.350 bw ( KiB/s): min=21248, max=24576, per=34.93%, avg=23449.60, stdev=848.65, samples=20 00:39:33.350 iops : min= 166, max= 192, avg=183.20, stdev= 6.63, samples=20 00:39:33.350 lat (msec) : 20=99.67%, 100=0.33% 00:39:33.350 cpu : usr=94.26%, sys=5.08%, ctx=56, majf=0, minf=1637 00:39:33.350 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.350 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.350 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.350 filename0: (groupid=0, jobs=1): err= 0: pid=1270340: Mon Jul 15 08:06:23 2024 00:39:33.350 read: IOPS=176, BW=22.1MiB/s (23.2MB/s)(222MiB/10048msec) 00:39:33.350 slat (nsec): min=5635, max=43529, avg=22443.76, stdev=3867.83 00:39:33.350 clat (usec): min=10165, max=56105, avg=16919.42, stdev=1856.02 00:39:33.350 lat (usec): min=10191, max=56132, avg=16941.86, stdev=1855.95 00:39:33.350 clat percentiles (usec): 00:39:33.350 | 1.00th=[11994], 5.00th=[14615], 10.00th=[15401], 20.00th=[15926], 00:39:33.350 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:39:33.350 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:39:33.350 | 99.00th=[19792], 99.50th=[20317], 99.90th=[50594], 99.95th=[56361], 00:39:33.350 | 99.99th=[56361] 00:39:33.350 bw ( KiB/s): min=21504, max=23808, per=33.83%, avg=22709.45, stdev=526.40, samples=20 00:39:33.350 iops : min= 168, max= 186, avg=177.40, stdev= 4.11, samples=20 00:39:33.350 lat (msec) : 20=99.32%, 50=0.56%, 100=0.11% 00:39:33.350 cpu : usr=94.84%, sys=4.58%, ctx=22, majf=0, minf=1636 00:39:33.350 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.350 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.350 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:33.350 00:39:33.350 Run status group 0 (all jobs): 00:39:33.350 READ: bw=65.6MiB/s (68.7MB/s), 20.7MiB/s-22.9MiB/s (21.7MB/s-24.0MB/s), io=659MiB (691MB), run=10007-10048msec 00:39:33.350 ----------------------------------------------------- 00:39:33.350 Suppressions used: 00:39:33.350 count bytes template 00:39:33.350 5 44 /usr/src/fio/parse.c 00:39:33.350 1 8 libtcmalloc_minimal.so 00:39:33.350 1 904 libcrypto.so 00:39:33.350 ----------------------------------------------------- 00:39:33.350 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.350 00:39:33.350 real 0m12.284s 00:39:33.350 user 0m30.522s 00:39:33.350 sys 0m1.948s 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:33.350 08:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:33.350 ************************************ 00:39:33.350 END TEST fio_dif_digest 00:39:33.350 ************************************ 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:33.350 08:06:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:33.350 08:06:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:33.350 rmmod nvme_tcp 00:39:33.350 rmmod nvme_fabrics 00:39:33.350 rmmod nvme_keyring 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1262935 ']' 00:39:33.350 08:06:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1262935 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1262935 ']' 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1262935 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1262935 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1262935' 00:39:33.350 killing process with pid 1262935 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1262935 00:39:33.350 08:06:24 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1262935 00:39:34.724 08:06:25 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:34.724 08:06:25 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:35.657 Waiting for block devices as requested 00:39:35.657 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:35.657 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 
00:39:35.657 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:35.914 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:35.914 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:35.914 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:35.914 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:36.173 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:36.173 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:36.173 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:36.173 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:36.431 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:36.431 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:36.431 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:36.431 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:36.431 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:36.689 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:36.689 08:06:27 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:36.689 08:06:27 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:36.689 08:06:27 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:36.689 08:06:27 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:36.689 08:06:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.689 08:06:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:36.689 08:06:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:39.222 08:06:29 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:39.222 00:39:39.222 real 1m15.679s 00:39:39.222 user 6m44.023s 00:39:39.222 sys 0m19.832s 00:39:39.222 08:06:29 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:39.222 08:06:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:39.222 ************************************ 00:39:39.222 END TEST nvmf_dif 00:39:39.222 ************************************ 00:39:39.222 08:06:29 -- common/autotest_common.sh@1142 -- # return 0 00:39:39.222 08:06:29 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:39.222 08:06:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:39.222 08:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:39.222 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:39:39.222 ************************************ 00:39:39.222 START TEST nvmf_abort_qd_sizes 00:39:39.222 ************************************ 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:39.222 * Looking for test storage... 
00:39:39.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:39.222 08:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:39.222 08:06:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:39:39.223 08:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:41.126 08:06:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:41.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:41.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.126 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:41.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:41.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.127 08:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:41.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:41.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:39:41.127 00:39:41.127 --- 10.0.0.2 ping statistics --- 00:39:41.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.127 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:41.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:41.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:39:41.127 00:39:41.127 --- 10.0.0.1 ping statistics --- 00:39:41.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.127 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:41.127 08:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:42.061 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:42.061 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:42.061 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:43.027 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1275379 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1275379 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1275379 ']' 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:43.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:43.284 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.284 [2024-07-15 08:06:34.452446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:43.284 [2024-07-15 08:06:34.452591] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:43.541 EAL: No free 2048 kB hugepages reported on node 1 00:39:43.541 [2024-07-15 08:06:34.605750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:43.798 [2024-07-15 08:06:34.964746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:43.798 [2024-07-15 08:06:34.964827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:43.798 [2024-07-15 08:06:34.964882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:43.798 [2024-07-15 08:06:34.964912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:43.798 [2024-07-15 08:06:34.964941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:43.798 [2024-07-15 08:06:34.965070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.798 [2024-07-15 08:06:34.965144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:43.798 [2024-07-15 08:06:34.965192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.798 [2024-07-15 08:06:34.965202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:39:44.363 08:06:35 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:44.363 08:06:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:44.363 ************************************ 00:39:44.363 START TEST spdk_target_abort 00:39:44.363 ************************************ 00:39:44.363 08:06:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:44.363 08:06:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:44.363 08:06:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:44.363 08:06:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.363 08:06:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.645 spdk_targetn1 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.645 [2024-07-15 08:06:38.437043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.645 [2024-07-15 08:06:38.483599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:47.645 08:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:47.645 EAL: No free 2048 kB hugepages 
reported on node 1 00:39:50.928 Initializing NVMe Controllers 00:39:50.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:50.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:50.928 Initialization complete. Launching workers. 00:39:50.928 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9072, failed: 0 00:39:50.928 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7820 00:39:50.928 success 773, unsuccess 479, failed 0 00:39:50.928 08:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:50.928 08:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:50.928 EAL: No free 2048 kB hugepages reported on node 1 00:39:54.206 Initializing NVMe Controllers 00:39:54.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:54.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:54.206 Initialization complete. Launching workers. 00:39:54.206 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8486, failed: 0 00:39:54.206 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7234 00:39:54.206 success 305, unsuccess 947, failed 0 00:39:54.207 08:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:54.207 08:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:54.207 EAL: No free 2048 kB hugepages reported on node 1 00:39:57.485 Initializing NVMe Controllers 00:39:57.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:57.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:57.486 Initialization complete. Launching workers. 
00:39:57.486 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27473, failed: 0 00:39:57.486 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2681, failed to submit 24792 00:39:57.486 success 201, unsuccess 2480, failed 0 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.486 08:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1275379 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1275379 ']' 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1275379 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1275379 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1275379' 00:39:58.855 killing process with pid 1275379 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1275379 00:39:58.855 08:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1275379 00:39:59.788 00:39:59.788 real 0m15.496s 00:39:59.788 user 0m59.620s 00:39:59.788 sys 0m2.763s 00:39:59.788 08:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:59.788 08:06:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:59.788 ************************************ 00:39:59.788 END TEST spdk_target_abort 00:39:59.788 ************************************ 00:40:00.046 08:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:00.046 08:06:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:00.046 08:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:00.046 08:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:00.046 08:06:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:00.046 
************************************ 00:40:00.046 START TEST kernel_target_abort 00:40:00.046 ************************************ 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:00.046 08:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:00.980 Waiting for block devices as requested 00:40:01.237 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:40:01.237 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:01.237 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:01.515 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:01.515 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:01.515 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:01.515 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:01.772 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:01.772 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:01.772 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:01.772 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:02.029 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:02.029 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:02.029 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:02.287 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:02.287 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:02.287 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:02.878 No valid GPT data, bailing 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:02.878 08:06:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:02.878 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:40:02.879 00:40:02.879 Discovery Log Number of Records 2, Generation counter 2 00:40:02.879 =====Discovery Log Entry 0====== 00:40:02.879 trtype: tcp 00:40:02.879 adrfam: ipv4 00:40:02.879 subtype: current discovery subsystem 00:40:02.879 treq: not specified, sq flow control disable supported 00:40:02.879 portid: 1 00:40:02.879 trsvcid: 4420 00:40:02.879 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:02.879 traddr: 10.0.0.1 00:40:02.879 eflags: none 00:40:02.879 sectype: none 00:40:02.879 =====Discovery Log Entry 1====== 00:40:02.879 trtype: tcp 00:40:02.879 adrfam: ipv4 00:40:02.879 subtype: nvme subsystem 00:40:02.879 treq: not specified, sq flow control disable supported 00:40:02.879 portid: 1 00:40:02.879 trsvcid: 4420 00:40:02.879 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:02.879 traddr: 10.0.0.1 00:40:02.879 eflags: none 00:40:02.879 sectype: none 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.879 08:06:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:02.879 08:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:02.879 EAL: No free 2048 kB hugepages reported on node 1 00:40:06.166 Initializing NVMe Controllers 00:40:06.166 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:06.166 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:06.166 Initialization complete. Launching workers. 00:40:06.166 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34001, failed: 0 00:40:06.166 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34001, failed to submit 0 00:40:06.166 success 0, unsuccess 34001, failed 0 00:40:06.166 08:06:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:06.166 08:06:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:06.166 EAL: No free 2048 kB hugepages reported on node 1 00:40:09.449 Initializing NVMe Controllers 00:40:09.449 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:09.449 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:09.449 Initialization complete. Launching workers. 
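The kernel target being exercised here was stood up through nvmet configfs a few lines above. The xtrace hides the redirection targets of the echo commands, so the attribute names below are an assumed reconstruction based on the standard Linux nvmet configfs layout (one identify-string write, SPDK-nqn.2016-06.io.spdk:testnqn, is omitted because its destination is not visible in the trace):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"        # assumed target
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing device found by the GPT probe above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output above, with its two discovery log entries and subnqn nqn.2016-06.io.spdk:testnqn listening on 10.0.0.1:4420, confirms the port and subsystem link took effect.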
00:40:09.449 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55715, failed: 0 00:40:09.449 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14030, failed to submit 41685 00:40:09.449 success 0, unsuccess 14030, failed 0 00:40:09.449 08:07:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:09.449 08:07:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:09.449 EAL: No free 2048 kB hugepages reported on node 1 00:40:12.735 Initializing NVMe Controllers 00:40:12.735 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:12.735 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:12.735 Initialization complete. Launching workers. 00:40:12.735 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54867, failed: 0 00:40:12.735 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13678, failed to submit 41189 00:40:12.735 success 0, unsuccess 13678, failed 0 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:12.735 08:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:13.670 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:13.670 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:40:13.670 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:13.670 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:14.606 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:40:14.867 00:40:14.867 real 0m14.820s 00:40:14.867 user 0m6.188s 00:40:14.867 sys 0m3.550s 00:40:14.867 08:07:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:14.867 08:07:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.867 ************************************ 00:40:14.867 END TEST kernel_target_abort 00:40:14.867 ************************************ 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:14.867 rmmod nvme_tcp 00:40:14.867 rmmod nvme_fabrics 00:40:14.867 rmmod nvme_keyring 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1275379 ']' 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1275379 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1275379 ']' 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1275379 00:40:14.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1275379) - No such process 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1275379 is not found' 00:40:14.867 Process with pid 1275379 is not found 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:14.867 08:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:15.803 Waiting for block devices as requested 00:40:15.803 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:40:16.061 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:16.061 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:16.318 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:16.318 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:16.318 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:16.318 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:16.575 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:16.575 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:16.575 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:16.575 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:16.575 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:16.834 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:16.834 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:40:16.834 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:16.834 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:17.095 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:17.095 08:07:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:19.633 08:07:10 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:19.633 00:40:19.633 real 0m40.338s 00:40:19.633 user 1m7.999s 00:40:19.633 sys 0m9.658s 00:40:19.633 08:07:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:19.633 08:07:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:19.633 ************************************ 00:40:19.633 END TEST nvmf_abort_qd_sizes 00:40:19.633 ************************************ 00:40:19.633 08:07:10 -- common/autotest_common.sh@1142 -- # return 0 00:40:19.633 08:07:10 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:19.633 08:07:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:19.633 08:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:19.633 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:40:19.633 ************************************ 00:40:19.633 START TEST keyring_file 00:40:19.633 ************************************ 00:40:19.633 08:07:10 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:19.633 * Looking for test storage... 
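Before moving into the keyring tests, note how clean_kernel_target, traced at the end of kernel_target_abort above, undoes the configfs setup in reverse order and unloads the modules. Reusing the variables from the setup sketch earlier, and again with the echo destination assumed rather than shown:

    echo 0 > "$subsys/namespaces/1/enable"   # assumed target; xtrace hides the redirection
    rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir  "$subsys/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$subsys"
    modprobe -r nvmet_tcp nvmet

setup.sh then rebinds the ioatdma DMA engines and the NVMe device back to vfio-pci, which is the block of "-> vfio-pci" lines above.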
00:40:19.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:19.633 08:07:10 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:19.633 08:07:10 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:19.633 08:07:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:19.634 08:07:10 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:19.634 08:07:10 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:19.634 08:07:10 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:19.634 08:07:10 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.634 08:07:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.634 08:07:10 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.634 08:07:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:19.634 08:07:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@47 -- # : 0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S874YxEnNW 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:19.634 08:07:10 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S874YxEnNW 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S874YxEnNW 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.S874YxEnNW 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yFE4qJsH5X 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:19.634 08:07:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yFE4qJsH5X 00:40:19.634 08:07:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yFE4qJsH5X 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yFE4qJsH5X 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=1281597 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:19.634 08:07:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1281597 00:40:19.634 08:07:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1281597 ']' 00:40:19.634 08:07:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.634 08:07:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:19.634 08:07:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.634 08:07:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:19.634 08:07:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.634 [2024-07-15 08:07:10.584047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:19.634 [2024-07-15 08:07:10.584212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281597 ] 00:40:19.634 EAL: No free 2048 kB hugepages reported on node 1 00:40:19.634 [2024-07-15 08:07:10.718684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.893 [2024-07-15 08:07:10.971449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:20.827 08:07:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.827 [2024-07-15 08:07:11.867741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.827 null0 00:40:20.827 [2024-07-15 08:07:11.899774] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:20.827 [2024-07-15 08:07:11.900358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:20.827 [2024-07-15 08:07:11.907798] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:20.827 08:07:11 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.827 [2024-07-15 08:07:11.915810] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:20.827 request: 00:40:20.827 { 00:40:20.827 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:20.827 "secure_channel": false, 00:40:20.827 "listen_address": { 00:40:20.827 "trtype": "tcp", 00:40:20.827 "traddr": "127.0.0.1", 00:40:20.827 "trsvcid": "4420" 00:40:20.827 }, 00:40:20.827 "method": "nvmf_subsystem_add_listener", 00:40:20.827 "req_id": 1 00:40:20.827 } 00:40:20.827 Got JSON-RPC error response 00:40:20.827 response: 00:40:20.827 { 00:40:20.827 "code": -32602, 00:40:20.827 "message": "Invalid parameters" 00:40:20.827 } 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 
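The two key files exercised below (/tmp/tmp.S874YxEnNW for key0, /tmp/tmp.yFE4qJsH5X for key1) were produced earlier by prep_key, roughly as follows; the exact encoding lives in format_interchange_psk from nvmf/common.sh, and the comment on its output is my reading of the NVMe TLS PSK interchange format, not something the trace prints:

    key=00112233445566778899aabbccddeeff    # key1 uses 112233445566778899aabbccddeeff00
    path=$(mktemp)                          # /tmp/tmp.S874YxEnNW in this run
    format_interchange_psk "$key" 0 > "$path"   # "NVMeTLSkey-1:00:<base64 payload>:"; digest 0 = no hash
    chmod 0600 "$path"                      # anything looser is rejected, see the 0660 test below

bperf_cmd in the lines that follow is simply scripts/rpc.py pointed at bdevperf's RPC socket (-s /var/tmp/bperf.sock).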
00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:20.827 08:07:11 keyring_file -- keyring/file.sh@46 -- # bperfpid=1281737 00:40:20.827 08:07:11 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:20.827 08:07:11 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1281737 /var/tmp/bperf.sock 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1281737 ']' 00:40:20.827 08:07:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:20.828 08:07:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:20.828 08:07:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:20.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:20.828 08:07:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:20.828 08:07:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.828 [2024-07-15 08:07:11.998666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:20.828 [2024-07-15 08:07:11.998818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281737 ] 00:40:21.123 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.123 [2024-07-15 08:07:12.132971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:21.381 [2024-07-15 08:07:12.384963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:21.947 08:07:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:21.947 08:07:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:21.947 08:07:12 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:21.947 08:07:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:21.947 08:07:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yFE4qJsH5X 00:40:21.947 08:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yFE4qJsH5X 00:40:22.203 08:07:13 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:40:22.203 08:07:13 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:40:22.203 08:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.204 08:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.204 08:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.461 08:07:13 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.S874YxEnNW == \/\t\m\p\/\t\m\p\.\S\8\7\4\Y\x\E\n\N\W ]] 00:40:22.461 08:07:13 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:40:22.461 08:07:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:22.461 08:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.461 08:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.461 08:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:22.718 08:07:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yFE4qJsH5X == \/\t\m\p\/\t\m\p\.\y\F\E\4\q\J\s\H\5\X ]] 00:40:22.718 08:07:13 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:40:22.718 08:07:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:22.718 08:07:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.718 08:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.718 08:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.718 08:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.976 08:07:14 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:40:22.976 08:07:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:40:22.976 08:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:22.976 08:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.976 08:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.976 08:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.976 08:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:23.233 08:07:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:23.233 08:07:14 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:23.233 08:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:23.491 [2024-07-15 08:07:14.650452] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:23.748 nvme0n1 00:40:23.748 08:07:14 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:40:23.748 08:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:23.748 08:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:23.748 08:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:23.748 08:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:23.748 08:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:24.006 08:07:15 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:40:24.006 08:07:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:40:24.006 08:07:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:24.006 08:07:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:24.006 08:07:15 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:24.006 08:07:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:24.006 08:07:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:24.264 08:07:15 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:40:24.264 08:07:15 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:24.264 Running I/O for 1 seconds... 00:40:25.201 00:40:25.201 Latency(us) 00:40:25.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:25.201 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:25.201 nvme0n1 : 1.03 4478.82 17.50 0.00 0.00 28226.64 9514.86 40001.23 00:40:25.201 =================================================================================================================== 00:40:25.201 Total : 4478.82 17.50 0.00 0.00 28226.64 9514.86 40001.23 00:40:25.201 0 00:40:25.201 08:07:16 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:25.201 08:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:25.768 08:07:16 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.768 08:07:16 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:40:25.768 08:07:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.768 08:07:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:26.027 08:07:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:26.027 08:07:17 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.027 08:07:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:26.027 08:07:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.027 08:07:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:26.027 08:07:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.027 08:07:17 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:26.027 08:07:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.027 08:07:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.027 08:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:26.285 [2024-07-15 08:07:17.429348] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:26.285 [2024-07-15 08:07:17.429609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:40:26.285 [2024-07-15 08:07:17.430584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:26.285 [2024-07-15 08:07:17.431575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:26.285 [2024-07-15 08:07:17.431611] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:26.285 [2024-07-15 08:07:17.431633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:26.285 request: 00:40:26.285 { 00:40:26.285 "name": "nvme0", 00:40:26.285 "trtype": "tcp", 00:40:26.285 "traddr": "127.0.0.1", 00:40:26.285 "adrfam": "ipv4", 00:40:26.285 "trsvcid": "4420", 00:40:26.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:26.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:26.285 "prchk_reftag": false, 00:40:26.285 "prchk_guard": false, 00:40:26.285 "hdgst": false, 00:40:26.285 "ddgst": false, 00:40:26.285 "psk": "key1", 00:40:26.285 "method": "bdev_nvme_attach_controller", 00:40:26.285 "req_id": 1 00:40:26.285 } 00:40:26.285 Got JSON-RPC error response 00:40:26.285 response: 00:40:26.285 { 00:40:26.285 "code": -5, 00:40:26.285 "message": "Input/output error" 00:40:26.285 } 00:40:26.285 08:07:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:26.285 08:07:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:26.285 08:07:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:26.285 08:07:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:26.285 08:07:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:40:26.285 08:07:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:26.285 08:07:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:26.285 08:07:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:26.285 08:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.285 08:07:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:26.543 08:07:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:40:26.543 08:07:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:40:26.543 08:07:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:26.543 
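The wrong-key negative test traced above boils down to a single RPC: the target side was set up with key0's PSK (the successful --psk key0 attach that created nvme0n1 earlier bears this out), so attaching with key1 must fail the TLS handshake, and it does, with the server dropping the socket and the RPC returning -5:

    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
    # -> "Input/output error" (code -5); the bdev_nvme log shows "Bad file
    #    descriptor" and errno 107 "Transport endpoint is not connected"
    #    as the connection is torn down

The get_refcnt checks on either side of the attempt confirm that neither key gained a reference from the failed attach (both stay at 1).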
08:07:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:26.543 08:07:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:26.543 08:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.543 08:07:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:26.801 08:07:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:26.801 08:07:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:40:26.801 08:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:27.058 08:07:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:40:27.058 08:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:27.316 08:07:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:40:27.316 08:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:27.316 08:07:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:40:27.574 08:07:18 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:40:27.574 08:07:18 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.S874YxEnNW 00:40:27.574 08:07:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.574 08:07:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:27.574 08:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:27.831 [2024-07-15 08:07:18.927546] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.S874YxEnNW': 0100660 00:40:27.831 [2024-07-15 08:07:18.927599] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:27.831 request: 00:40:27.831 { 00:40:27.831 "name": "key0", 00:40:27.831 "path": "/tmp/tmp.S874YxEnNW", 00:40:27.831 "method": "keyring_file_add_key", 00:40:27.831 "req_id": 1 00:40:27.831 } 00:40:27.831 Got JSON-RPC error response 00:40:27.831 response: 00:40:27.831 { 00:40:27.831 "code": -1, 00:40:27.831 "message": "Operation not permitted" 00:40:27.831 } 00:40:27.831 08:07:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:27.831 08:07:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.831 08:07:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.831 08:07:18 keyring_file 
-- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.831 08:07:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.S874YxEnNW 00:40:27.831 08:07:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:27.831 08:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S874YxEnNW 00:40:28.089 08:07:19 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.S874YxEnNW 00:40:28.089 08:07:19 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:40:28.089 08:07:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:28.089 08:07:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:28.089 08:07:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.089 08:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.089 08:07:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:28.347 08:07:19 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:40:28.347 08:07:19 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:28.347 08:07:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.347 08:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.604 [2024-07-15 08:07:19.677724] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.S874YxEnNW': No such file or directory 00:40:28.604 [2024-07-15 08:07:19.677783] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:28.604 [2024-07-15 08:07:19.677819] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:28.604 [2024-07-15 08:07:19.677836] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:28.604 [2024-07-15 08:07:19.677855] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:28.604 request: 00:40:28.604 { 00:40:28.604 "name": "nvme0", 00:40:28.604 "trtype": "tcp", 00:40:28.604 "traddr": "127.0.0.1", 00:40:28.604 "adrfam": "ipv4", 00:40:28.604 
"trsvcid": "4420", 00:40:28.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:28.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:28.604 "prchk_reftag": false, 00:40:28.604 "prchk_guard": false, 00:40:28.604 "hdgst": false, 00:40:28.604 "ddgst": false, 00:40:28.604 "psk": "key0", 00:40:28.604 "method": "bdev_nvme_attach_controller", 00:40:28.604 "req_id": 1 00:40:28.604 } 00:40:28.604 Got JSON-RPC error response 00:40:28.604 response: 00:40:28.604 { 00:40:28.604 "code": -19, 00:40:28.604 "message": "No such device" 00:40:28.604 } 00:40:28.604 08:07:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:28.604 08:07:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:28.604 08:07:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:28.604 08:07:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:28.604 08:07:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:40:28.604 08:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:28.862 08:07:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.07NK3slRiB 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:28.862 08:07:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:28.862 08:07:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:28.862 08:07:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:28.862 08:07:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:28.862 08:07:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:28.862 08:07:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.07NK3slRiB 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.07NK3slRiB 00:40:28.862 08:07:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.07NK3slRiB 00:40:28.862 08:07:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.07NK3slRiB 00:40:28.862 08:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.07NK3slRiB 00:40:29.120 08:07:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:29.120 08:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:29.379 nvme0n1 00:40:29.379 
08:07:20 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:40:29.379 08:07:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:29.379 08:07:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.379 08:07:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.379 08:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.379 08:07:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.637 08:07:20 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:40:29.637 08:07:20 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:40:29.637 08:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:29.896 08:07:21 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:40:29.896 08:07:21 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:40:29.896 08:07:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.896 08:07:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.896 08:07:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:30.155 08:07:21 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:40:30.155 08:07:21 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:40:30.155 08:07:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:30.155 08:07:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:30.155 08:07:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:30.155 08:07:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:30.155 08:07:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:30.413 08:07:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:40:30.413 08:07:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:30.413 08:07:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:30.671 08:07:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:40:30.671 08:07:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:30.671 08:07:21 keyring_file -- keyring/file.sh@104 -- # jq length 00:40:30.929 08:07:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:40:30.929 08:07:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.07NK3slRiB 00:40:30.929 08:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.07NK3slRiB 00:40:31.188 08:07:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yFE4qJsH5X 00:40:31.188 08:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yFE4qJsH5X 00:40:31.446 08:07:22 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:31.446 08:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:31.704 nvme0n1 00:40:31.704 08:07:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:40:31.704 08:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:32.270 08:07:23 keyring_file -- keyring/file.sh@112 -- # config='{ 00:40:32.270 "subsystems": [ 00:40:32.270 { 00:40:32.270 "subsystem": "keyring", 00:40:32.270 "config": [ 00:40:32.270 { 00:40:32.270 "method": "keyring_file_add_key", 00:40:32.270 "params": { 00:40:32.270 "name": "key0", 00:40:32.270 "path": "/tmp/tmp.07NK3slRiB" 00:40:32.270 } 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "method": "keyring_file_add_key", 00:40:32.270 "params": { 00:40:32.270 "name": "key1", 00:40:32.270 "path": "/tmp/tmp.yFE4qJsH5X" 00:40:32.270 } 00:40:32.270 } 00:40:32.270 ] 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "subsystem": "iobuf", 00:40:32.270 "config": [ 00:40:32.270 { 00:40:32.270 "method": "iobuf_set_options", 00:40:32.270 "params": { 00:40:32.270 "small_pool_count": 8192, 00:40:32.270 "large_pool_count": 1024, 00:40:32.270 "small_bufsize": 8192, 00:40:32.270 "large_bufsize": 135168 00:40:32.270 } 00:40:32.270 } 00:40:32.270 ] 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "subsystem": "sock", 00:40:32.270 "config": [ 00:40:32.270 { 00:40:32.270 "method": "sock_set_default_impl", 00:40:32.270 "params": { 00:40:32.270 "impl_name": "posix" 00:40:32.270 } 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "method": "sock_impl_set_options", 00:40:32.270 "params": { 00:40:32.270 "impl_name": "ssl", 00:40:32.270 "recv_buf_size": 4096, 00:40:32.270 "send_buf_size": 4096, 00:40:32.270 "enable_recv_pipe": true, 00:40:32.270 "enable_quickack": false, 00:40:32.270 "enable_placement_id": 0, 00:40:32.270 "enable_zerocopy_send_server": true, 00:40:32.270 "enable_zerocopy_send_client": false, 00:40:32.270 "zerocopy_threshold": 0, 00:40:32.270 "tls_version": 0, 00:40:32.270 "enable_ktls": false 00:40:32.270 } 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "method": "sock_impl_set_options", 00:40:32.270 "params": { 00:40:32.270 "impl_name": "posix", 00:40:32.270 "recv_buf_size": 2097152, 00:40:32.270 "send_buf_size": 2097152, 00:40:32.270 "enable_recv_pipe": true, 00:40:32.270 "enable_quickack": false, 00:40:32.270 "enable_placement_id": 0, 00:40:32.270 "enable_zerocopy_send_server": true, 00:40:32.270 "enable_zerocopy_send_client": false, 00:40:32.270 "zerocopy_threshold": 0, 00:40:32.270 "tls_version": 0, 00:40:32.270 "enable_ktls": false 00:40:32.270 } 00:40:32.270 } 00:40:32.270 ] 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "subsystem": "vmd", 00:40:32.270 "config": [] 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "subsystem": "accel", 00:40:32.270 "config": [ 00:40:32.270 { 00:40:32.270 "method": "accel_set_options", 00:40:32.270 "params": { 00:40:32.270 "small_cache_size": 128, 00:40:32.270 "large_cache_size": 16, 00:40:32.270 "task_count": 2048, 00:40:32.270 "sequence_count": 2048, 00:40:32.270 "buf_count": 2048 00:40:32.270 } 00:40:32.270 } 00:40:32.270 ] 00:40:32.270 
}, 00:40:32.270 { 00:40:32.270 "subsystem": "bdev", 00:40:32.270 "config": [ 00:40:32.270 { 00:40:32.270 "method": "bdev_set_options", 00:40:32.270 "params": { 00:40:32.270 "bdev_io_pool_size": 65535, 00:40:32.270 "bdev_io_cache_size": 256, 00:40:32.270 "bdev_auto_examine": true, 00:40:32.270 "iobuf_small_cache_size": 128, 00:40:32.270 "iobuf_large_cache_size": 16 00:40:32.270 } 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "method": "bdev_raid_set_options", 00:40:32.270 "params": { 00:40:32.270 "process_window_size_kb": 1024 00:40:32.270 } 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "method": "bdev_iscsi_set_options", 00:40:32.270 "params": { 00:40:32.270 "timeout_sec": 30 00:40:32.270 } 00:40:32.270 }, 00:40:32.270 { 00:40:32.270 "method": "bdev_nvme_set_options", 00:40:32.270 "params": { 00:40:32.270 "action_on_timeout": "none", 00:40:32.270 "timeout_us": 0, 00:40:32.270 "timeout_admin_us": 0, 00:40:32.270 "keep_alive_timeout_ms": 10000, 00:40:32.270 "arbitration_burst": 0, 00:40:32.270 "low_priority_weight": 0, 00:40:32.270 "medium_priority_weight": 0, 00:40:32.270 "high_priority_weight": 0, 00:40:32.270 "nvme_adminq_poll_period_us": 10000, 00:40:32.270 "nvme_ioq_poll_period_us": 0, 00:40:32.270 "io_queue_requests": 512, 00:40:32.270 "delay_cmd_submit": true, 00:40:32.270 "transport_retry_count": 4, 00:40:32.270 "bdev_retry_count": 3, 00:40:32.270 "transport_ack_timeout": 0, 00:40:32.270 "ctrlr_loss_timeout_sec": 0, 00:40:32.270 "reconnect_delay_sec": 0, 00:40:32.270 "fast_io_fail_timeout_sec": 0, 00:40:32.270 "disable_auto_failback": false, 00:40:32.270 "generate_uuids": false, 00:40:32.270 "transport_tos": 0, 00:40:32.271 "nvme_error_stat": false, 00:40:32.271 "rdma_srq_size": 0, 00:40:32.271 "io_path_stat": false, 00:40:32.271 "allow_accel_sequence": false, 00:40:32.271 "rdma_max_cq_size": 0, 00:40:32.271 "rdma_cm_event_timeout_ms": 0, 00:40:32.271 "dhchap_digests": [ 00:40:32.271 "sha256", 00:40:32.271 "sha384", 00:40:32.271 "sha512" 00:40:32.271 ], 00:40:32.271 "dhchap_dhgroups": [ 00:40:32.271 "null", 00:40:32.271 "ffdhe2048", 00:40:32.271 "ffdhe3072", 00:40:32.271 "ffdhe4096", 00:40:32.271 "ffdhe6144", 00:40:32.271 "ffdhe8192" 00:40:32.271 ] 00:40:32.271 } 00:40:32.271 }, 00:40:32.271 { 00:40:32.271 "method": "bdev_nvme_attach_controller", 00:40:32.271 "params": { 00:40:32.271 "name": "nvme0", 00:40:32.271 "trtype": "TCP", 00:40:32.271 "adrfam": "IPv4", 00:40:32.271 "traddr": "127.0.0.1", 00:40:32.271 "trsvcid": "4420", 00:40:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:32.271 "prchk_reftag": false, 00:40:32.271 "prchk_guard": false, 00:40:32.271 "ctrlr_loss_timeout_sec": 0, 00:40:32.271 "reconnect_delay_sec": 0, 00:40:32.271 "fast_io_fail_timeout_sec": 0, 00:40:32.271 "psk": "key0", 00:40:32.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:32.271 "hdgst": false, 00:40:32.271 "ddgst": false 00:40:32.271 } 00:40:32.271 }, 00:40:32.271 { 00:40:32.271 "method": "bdev_nvme_set_hotplug", 00:40:32.271 "params": { 00:40:32.271 "period_us": 100000, 00:40:32.271 "enable": false 00:40:32.271 } 00:40:32.271 }, 00:40:32.271 { 00:40:32.271 "method": "bdev_wait_for_examine" 00:40:32.271 } 00:40:32.271 ] 00:40:32.271 }, 00:40:32.271 { 00:40:32.271 "subsystem": "nbd", 00:40:32.271 "config": [] 00:40:32.271 } 00:40:32.271 ] 00:40:32.271 }' 00:40:32.271 08:07:23 keyring_file -- keyring/file.sh@114 -- # killprocess 1281737 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1281737 ']' 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1281737 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1281737 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1281737' 00:40:32.271 killing process with pid 1281737 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@967 -- # kill 1281737 00:40:32.271 Received shutdown signal, test time was about 1.000000 seconds 00:40:32.271 00:40:32.271 Latency(us) 00:40:32.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.271 =================================================================================================================== 00:40:32.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:32.271 08:07:23 keyring_file -- common/autotest_common.sh@972 -- # wait 1281737 00:40:33.203 08:07:24 keyring_file -- keyring/file.sh@117 -- # bperfpid=1283332 00:40:33.203 08:07:24 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1283332 /var/tmp/bperf.sock 00:40:33.203 08:07:24 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1283332 ']' 00:40:33.203 08:07:24 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:40:33.203 "subsystems": [ 00:40:33.203 { 00:40:33.203 "subsystem": "keyring", 00:40:33.203 "config": [ 00:40:33.203 { 00:40:33.203 "method": "keyring_file_add_key", 00:40:33.203 "params": { 00:40:33.203 "name": "key0", 00:40:33.203 "path": "/tmp/tmp.07NK3slRiB" 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "keyring_file_add_key", 00:40:33.203 "params": { 00:40:33.203 "name": "key1", 00:40:33.203 "path": "/tmp/tmp.yFE4qJsH5X" 00:40:33.203 } 00:40:33.203 } 00:40:33.203 ] 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "subsystem": "iobuf", 00:40:33.203 "config": [ 00:40:33.203 { 00:40:33.203 "method": "iobuf_set_options", 00:40:33.203 "params": { 00:40:33.203 "small_pool_count": 8192, 00:40:33.203 "large_pool_count": 1024, 00:40:33.203 "small_bufsize": 8192, 00:40:33.203 "large_bufsize": 135168 00:40:33.203 } 00:40:33.203 } 00:40:33.203 ] 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "subsystem": "sock", 00:40:33.203 "config": [ 00:40:33.203 { 00:40:33.203 "method": "sock_set_default_impl", 00:40:33.203 "params": { 00:40:33.203 "impl_name": "posix" 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "sock_impl_set_options", 00:40:33.203 "params": { 00:40:33.203 "impl_name": "ssl", 00:40:33.203 "recv_buf_size": 4096, 00:40:33.203 "send_buf_size": 4096, 00:40:33.203 "enable_recv_pipe": true, 00:40:33.203 "enable_quickack": false, 00:40:33.203 "enable_placement_id": 0, 00:40:33.203 "enable_zerocopy_send_server": true, 00:40:33.203 "enable_zerocopy_send_client": false, 00:40:33.203 "zerocopy_threshold": 0, 00:40:33.203 "tls_version": 0, 00:40:33.203 "enable_ktls": false 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "sock_impl_set_options", 00:40:33.203 "params": { 00:40:33.203 "impl_name": "posix", 00:40:33.203 "recv_buf_size": 2097152, 00:40:33.203 "send_buf_size": 2097152, 00:40:33.203 "enable_recv_pipe": true, 00:40:33.203 "enable_quickack": false, 00:40:33.203 "enable_placement_id": 0, 00:40:33.203 
"enable_zerocopy_send_server": true, 00:40:33.203 "enable_zerocopy_send_client": false, 00:40:33.203 "zerocopy_threshold": 0, 00:40:33.203 "tls_version": 0, 00:40:33.203 "enable_ktls": false 00:40:33.203 } 00:40:33.203 } 00:40:33.203 ] 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "subsystem": "vmd", 00:40:33.203 "config": [] 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "subsystem": "accel", 00:40:33.203 "config": [ 00:40:33.203 { 00:40:33.203 "method": "accel_set_options", 00:40:33.203 "params": { 00:40:33.203 "small_cache_size": 128, 00:40:33.203 "large_cache_size": 16, 00:40:33.203 "task_count": 2048, 00:40:33.203 "sequence_count": 2048, 00:40:33.203 "buf_count": 2048 00:40:33.203 } 00:40:33.203 } 00:40:33.203 ] 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "subsystem": "bdev", 00:40:33.203 "config": [ 00:40:33.203 { 00:40:33.203 "method": "bdev_set_options", 00:40:33.203 "params": { 00:40:33.203 "bdev_io_pool_size": 65535, 00:40:33.203 "bdev_io_cache_size": 256, 00:40:33.203 "bdev_auto_examine": true, 00:40:33.203 "iobuf_small_cache_size": 128, 00:40:33.203 "iobuf_large_cache_size": 16 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "bdev_raid_set_options", 00:40:33.203 "params": { 00:40:33.203 "process_window_size_kb": 1024 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "bdev_iscsi_set_options", 00:40:33.203 "params": { 00:40:33.203 "timeout_sec": 30 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "bdev_nvme_set_options", 00:40:33.203 "params": { 00:40:33.203 "action_on_timeout": "none", 00:40:33.203 "timeout_us": 0, 00:40:33.203 "timeout_admin_us": 0, 00:40:33.203 "keep_alive_timeout_ms": 10000, 00:40:33.203 "arbitration_burst": 0, 00:40:33.203 "low_priority_weight": 0, 00:40:33.203 "medium_priority_weight": 0, 00:40:33.203 "high_priority_weight": 0, 00:40:33.203 "nvme_adminq_poll_period_us": 10000, 00:40:33.203 "nvme_ioq_poll_period_us": 0, 00:40:33.203 "io_queue_requests": 512, 00:40:33.203 "delay_cmd_submit": true, 00:40:33.203 "transport_retry_count": 4, 00:40:33.203 "bdev_retry_count": 3, 00:40:33.203 "transport_ack_timeout": 0, 00:40:33.203 "ctrlr_loss_timeout_sec": 0, 00:40:33.203 "reconnect_delay_sec": 0, 00:40:33.203 "fast_io_fail_timeout_sec": 0, 00:40:33.203 "disable_auto_failback": false, 00:40:33.203 "generate_uuids": false, 00:40:33.203 "transport_tos": 0, 00:40:33.203 "nvme_error_stat": false, 00:40:33.203 "rdma_srq_size": 0, 00:40:33.203 "io_path_stat": false, 00:40:33.203 "allow_accel_sequence": false, 00:40:33.203 "rdma_max_cq_size": 0, 00:40:33.203 "rdma_cm_event_timeout_ms": 0, 00:40:33.203 "dhchap_digests": [ 00:40:33.203 "sha256", 00:40:33.203 "sha384", 00:40:33.203 "sha512" 00:40:33.203 ], 00:40:33.203 "dhchap_dhgroups": [ 00:40:33.203 "null", 00:40:33.203 "ffdhe2048", 00:40:33.203 "ffdhe3072", 00:40:33.203 "ffdhe4096", 00:40:33.203 "ffdhe6144", 00:40:33.203 "ffdhe8192" 00:40:33.203 ] 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "bdev_nvme_attach_controller", 00:40:33.203 "params": { 00:40:33.203 "name": "nvme0", 00:40:33.203 "trtype": "TCP", 00:40:33.203 "adrfam": "IPv4", 00:40:33.203 "traddr": "127.0.0.1", 00:40:33.203 "trsvcid": "4420", 00:40:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:33.203 "prchk_reftag": false, 00:40:33.203 "prchk_guard": false, 00:40:33.203 "ctrlr_loss_timeout_sec": 0, 00:40:33.203 "reconnect_delay_sec": 0, 00:40:33.203 "fast_io_fail_timeout_sec": 0, 00:40:33.203 "psk": "key0", 00:40:33.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:33.203 
"hdgst": false, 00:40:33.203 "ddgst": false 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "bdev_nvme_set_hotplug", 00:40:33.203 "params": { 00:40:33.203 "period_us": 100000, 00:40:33.203 "enable": false 00:40:33.203 } 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "method": "bdev_wait_for_examine" 00:40:33.203 } 00:40:33.203 ] 00:40:33.203 }, 00:40:33.203 { 00:40:33.203 "subsystem": "nbd", 00:40:33.203 "config": [] 00:40:33.203 } 00:40:33.203 ] 00:40:33.203 }' 00:40:33.204 08:07:24 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:33.204 08:07:24 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:33.204 08:07:24 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:33.204 08:07:24 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:33.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:33.204 08:07:24 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:33.204 08:07:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:33.204 [2024-07-15 08:07:24.371249] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:33.204 [2024-07-15 08:07:24.371403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283332 ] 00:40:33.462 EAL: No free 2048 kB hugepages reported on node 1 00:40:33.462 [2024-07-15 08:07:24.503778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.721 [2024-07-15 08:07:24.756223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:34.025 [2024-07-15 08:07:25.198520] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:34.284 08:07:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:34.284 08:07:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:34.284 08:07:25 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:40:34.284 08:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.284 08:07:25 keyring_file -- keyring/file.sh@120 -- # jq length 00:40:34.542 08:07:25 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:34.542 08:07:25 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:34.542 08:07:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:34.542 08:07:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:34.542 08:07:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:34.542 08:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.542 08:07:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:34.800 08:07:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:34.800 08:07:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:34.800 08:07:25 keyring_file -- keyring/common.sh@12 
-- # get_key key1 00:40:34.800 08:07:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:34.800 08:07:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:34.800 08:07:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.800 08:07:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:35.057 08:07:26 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:35.057 08:07:26 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:35.057 08:07:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:35.057 08:07:26 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:35.317 08:07:26 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:35.317 08:07:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:35.317 08:07:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.07NK3slRiB /tmp/tmp.yFE4qJsH5X 00:40:35.317 08:07:26 keyring_file -- keyring/file.sh@20 -- # killprocess 1283332 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1283332 ']' 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1283332 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1283332 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1283332' 00:40:35.317 killing process with pid 1283332 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@967 -- # kill 1283332 00:40:35.317 Received shutdown signal, test time was about 1.000000 seconds 00:40:35.317 00:40:35.317 Latency(us) 00:40:35.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.317 =================================================================================================================== 00:40:35.317 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:35.317 08:07:26 keyring_file -- common/autotest_common.sh@972 -- # wait 1283332 00:40:36.253 08:07:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1281597 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1281597 ']' 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1281597 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1281597 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1281597' 00:40:36.253 killing process with pid 1281597 00:40:36.253 08:07:27 keyring_file -- 
common/autotest_common.sh@967 -- # kill 1281597 00:40:36.253 [2024-07-15 08:07:27.428592] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:36.253 08:07:27 keyring_file -- common/autotest_common.sh@972 -- # wait 1281597 00:40:38.784 00:40:38.784 real 0m19.505s 00:40:38.784 user 0m43.035s 00:40:38.784 sys 0m3.675s 00:40:38.784 08:07:29 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:38.784 08:07:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:38.784 ************************************ 00:40:38.784 END TEST keyring_file 00:40:38.784 ************************************ 00:40:38.784 08:07:29 -- common/autotest_common.sh@1142 -- # return 0 00:40:38.784 08:07:29 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:40:38.784 08:07:29 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:38.784 08:07:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:38.784 08:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:38.784 08:07:29 -- common/autotest_common.sh@10 -- # set +x 00:40:38.784 ************************************ 00:40:38.784 START TEST keyring_linux 00:40:38.784 ************************************ 00:40:38.784 08:07:29 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:38.784 * Looking for test storage... 00:40:38.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:38.784 08:07:29 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:38.784 08:07:29 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:38.784 08:07:29 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:38.784 08:07:29 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.784 08:07:29 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.784 08:07:29 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.784 08:07:29 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:38.784 08:07:29 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:38.784 08:07:29 
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:38.784 /tmp/:spdk-test:key0 00:40:38.784 08:07:29 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:38.784 08:07:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:38.784 08:07:29 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:39.043 08:07:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:39.043 08:07:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:39.043 /tmp/:spdk-test:key1 00:40:39.043 08:07:30 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1284093 00:40:39.043 08:07:30 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:39.043 08:07:30 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1284093 00:40:39.043 08:07:30 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1284093 ']' 00:40:39.043 08:07:30 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:39.043 08:07:30 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 
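The prep_key/format_interchange_psk records above build the NVMe TLS PSK interchange string with an inline python snippet. A minimal standalone sketch of that step, assuming the interchange payload is the literal key bytes followed by their little-endian CRC32, base64-encoded, with the digest id rendered as two hex digits (the helper name below is illustrative, not the one defined in nvmf/common.sh):

# Sketch only: reconstructs the NVMeTLSkey-1 interchange string seen in this trace.
format_interchange_psk_sketch() {
    local key=$1 digest=$2
    python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian checksum (assumed layout)
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF
}
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0
# expected output, matching the key material used by this run:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: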
00:40:39.043 08:07:30 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:39.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:39.043 08:07:30 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:39.043 08:07:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.043 [2024-07-15 08:07:30.132187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:39.043 [2024-07-15 08:07:30.132326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284093 ] 00:40:39.043 EAL: No free 2048 kB hugepages reported on node 1 00:40:39.043 [2024-07-15 08:07:30.268839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.302 [2024-07-15 08:07:30.522946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:40.235 08:07:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:40.235 [2024-07-15 08:07:31.387712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.235 null0 00:40:40.235 [2024-07-15 08:07:31.419702] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:40.235 [2024-07-15 08:07:31.420347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.235 08:07:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:40.235 465934942 00:40:40.235 08:07:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:40.235 366318109 00:40:40.235 08:07:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1284234 00:40:40.235 08:07:31 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:40.235 08:07:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1284234 /var/tmp/bperf.sock 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1284234 ']' 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:40.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
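The two keyctl records above place the interchange strings into the kernel session keyring; the serial numbers they print (465934942, 366318109) are what the later get_keysn and unlink steps resolve. The same round trip, sketched with the keyutils CLI (assumed installed, as this test requires):

# Sketch: session-keyring round trip mirroring the :spdk-test:key0 handling above.
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # add under the session keyring; prints the serial
keyctl search @s user :spdk-test:key0            # name -> serial lookup (the get_keysn step)
keyctl print "$sn"                               # payload readback used by the linux.sh@27 check
keyctl unlink "$sn"                              # cleanup; keyctl reports "1 links removed"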
00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:40.235 08:07:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:40.495 [2024-07-15 08:07:31.520176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:40.495 [2024-07-15 08:07:31.520342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284234 ] 00:40:40.495 EAL: No free 2048 kB hugepages reported on node 1 00:40:40.495 [2024-07-15 08:07:31.649987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.753 [2024-07-15 08:07:31.903410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.318 08:07:32 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:41.318 08:07:32 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:41.318 08:07:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:41.318 08:07:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:41.576 08:07:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:41.576 08:07:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:42.143 08:07:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:42.143 08:07:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:42.400 [2024-07-15 08:07:33.468060] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:42.400 nvme0n1 00:40:42.400 08:07:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:42.400 08:07:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:42.400 08:07:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:42.400 08:07:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:42.400 08:07:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:42.400 08:07:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:42.657 08:07:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:42.657 08:07:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:42.657 08:07:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:42.657 08:07:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:42.657 08:07:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:42.657 08:07:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:42.657 08:07:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:40:42.914 08:07:34 keyring_linux -- keyring/linux.sh@25 -- # sn=465934942 00:40:42.914 08:07:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:42.914 08:07:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:42.914 08:07:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 465934942 == \4\6\5\9\3\4\9\4\2 ]] 00:40:42.915 08:07:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 465934942 00:40:42.915 08:07:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:42.915 08:07:34 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:43.172 Running I/O for 1 seconds... 00:40:44.105 00:40:44.106 Latency(us) 00:40:44.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.106 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:44.106 nvme0n1 : 1.02 4522.41 17.67 0.00 0.00 28030.15 7621.59 38059.43 00:40:44.106 =================================================================================================================== 00:40:44.106 Total : 4522.41 17.67 0.00 0.00 28030.15 7621.59 38059.43 00:40:44.106 0 00:40:44.106 08:07:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:44.106 08:07:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:44.364 08:07:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:44.364 08:07:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:44.364 08:07:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:44.364 08:07:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:44.364 08:07:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:44.364 08:07:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:44.622 08:07:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:44.622 08:07:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:44.622 08:07:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:44.622 08:07:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
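The es bookkeeping traced around this attach call is a negative test: the attach with :spdk-test:key1 is expected to fail, and the wrapper inverts the exit status. A condensed sketch of that pattern; the real helper in autotest_common.sh additionally vets the argument with type -t, as the case records above show:

# Sketch: succeed only when the wrapped command fails, as NOT does in this trace.
NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))  # arithmetic truth test: exit 0 iff es was non-zero
}
# usage, with the bperf_cmd helper this trace defines in keyring/common.sh:
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1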
00:40:44.622 08:07:35 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:44.622 08:07:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:44.881 [2024-07-15 08:07:35.935419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:44.881 [2024-07-15 08:07:35.935905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:40:44.881 [2024-07-15 08:07:35.936866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:40:44.881 [2024-07-15 08:07:35.937858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:44.881 [2024-07-15 08:07:35.937916] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:44.881 [2024-07-15 08:07:35.937953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:44.881 request: 00:40:44.881 { 00:40:44.881 "name": "nvme0", 00:40:44.881 "trtype": "tcp", 00:40:44.881 "traddr": "127.0.0.1", 00:40:44.881 "adrfam": "ipv4", 00:40:44.881 "trsvcid": "4420", 00:40:44.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:44.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:44.881 "prchk_reftag": false, 00:40:44.881 "prchk_guard": false, 00:40:44.881 "hdgst": false, 00:40:44.881 "ddgst": false, 00:40:44.881 "psk": ":spdk-test:key1", 00:40:44.881 "method": "bdev_nvme_attach_controller", 00:40:44.881 "req_id": 1 00:40:44.881 } 00:40:44.881 Got JSON-RPC error response 00:40:44.881 response: 00:40:44.881 { 00:40:44.881 "code": -5, 00:40:44.881 "message": "Input/output error" 00:40:44.881 } 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@33 -- # sn=465934942 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 465934942 00:40:44.881 1 links removed 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
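The get_key/get_refcnt pairs that recur through both suites above boil down to a single keyring_get_keys RPC plus a jq filter. A minimal sketch, assuming the same bperf.sock RPC socket this run uses (the key name "key0" is the one from the keyring_file suite):

# Sketch: the get_key / get_refcnt pattern exercised throughout this trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
get_key()    { "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
get_refcnt key0  # values like 1 or 2 are what the (( 1 == 1 )) / (( 2 == 2 )) asserts compare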
00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@33 -- # sn=366318109 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 366318109 00:40:44.881 1 links removed 00:40:44.881 08:07:35 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1284234 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1284234 ']' 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1284234 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:44.881 08:07:35 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1284234 00:40:44.881 08:07:36 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:44.881 08:07:36 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:44.881 08:07:36 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1284234' 00:40:44.881 killing process with pid 1284234 00:40:44.882 08:07:36 keyring_linux -- common/autotest_common.sh@967 -- # kill 1284234 00:40:44.882 Received shutdown signal, test time was about 1.000000 seconds 00:40:44.882 00:40:44.882 Latency(us) 00:40:44.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.882 =================================================================================================================== 00:40:44.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:44.882 08:07:36 keyring_linux -- common/autotest_common.sh@972 -- # wait 1284234 00:40:45.819 08:07:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1284093 00:40:45.819 08:07:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1284093 ']' 00:40:45.819 08:07:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1284093 00:40:45.819 08:07:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1284093 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1284093' 00:40:46.077 killing process with pid 1284093 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 1284093 00:40:46.077 08:07:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 1284093 00:40:48.612 00:40:48.612 real 0m9.636s 00:40:48.612 user 0m15.855s 00:40:48.612 sys 0m1.942s 00:40:48.612 08:07:39 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:48.612 08:07:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:48.612 ************************************ 00:40:48.612 END TEST keyring_linux 00:40:48.612 ************************************ 00:40:48.612 08:07:39 -- common/autotest_common.sh@1142 -- # return 0 00:40:48.612 08:07:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 
-- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:48.612 08:07:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:40:48.612 08:07:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:48.612 08:07:39 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:48.612 08:07:39 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:48.612 08:07:39 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:40:48.612 08:07:39 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:40:48.612 08:07:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:48.612 08:07:39 -- common/autotest_common.sh@10 -- # set +x 00:40:48.612 08:07:39 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:40:48.612 08:07:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:48.612 08:07:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:48.612 08:07:39 -- common/autotest_common.sh@10 -- # set +x 00:40:50.029 INFO: APP EXITING 00:40:50.029 INFO: killing all VMs 00:40:50.029 INFO: killing vhost app 00:40:50.029 INFO: EXIT DONE 00:40:51.404 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:40:51.404 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:51.404 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:51.404 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:51.404 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:51.404 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:40:51.405 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:51.405 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:51.405 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:51.405 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:51.405 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:51.405 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:51.405 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:51.405 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:51.405 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:51.405 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:51.405 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:52.779 Cleaning 00:40:52.779 Removing: /var/run/dpdk/spdk0/config 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:52.779 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:52.779 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:52.779 Removing: 
00:40:52.779 Removing: /var/run/dpdk/spdk1/config
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:52.779 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:40:52.779 Removing: /var/run/dpdk/spdk1/hugepage_info
00:40:52.779 Removing: /var/run/dpdk/spdk1/mp_socket
00:40:52.779 Removing: /var/run/dpdk/spdk2/config
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:52.779 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:40:52.779 Removing: /var/run/dpdk/spdk2/hugepage_info
00:40:52.779 Removing: /var/run/dpdk/spdk3/config
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:52.779 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:40:52.779 Removing: /var/run/dpdk/spdk3/hugepage_info
00:40:52.779 Removing: /var/run/dpdk/spdk4/config
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:52.779 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:40:52.779 Removing: /var/run/dpdk/spdk4/hugepage_info
00:40:52.779 Removing: /dev/shm/bdev_svc_trace.1
00:40:52.779 Removing: /dev/shm/nvmf_trace.0
00:40:52.779 Removing: /dev/shm/spdk_tgt_trace.pid934823
00:40:52.779 Removing: /var/run/dpdk/spdk0
00:40:52.779 Removing: /var/run/dpdk/spdk1
00:40:52.779 Removing: /var/run/dpdk/spdk2
00:40:52.779 Removing: /var/run/dpdk/spdk3
00:40:52.779 Removing: /var/run/dpdk/spdk4
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1022499
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1025256
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1033075
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1036498
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1039113
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1039522
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1043637
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1049454
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1049737
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1052638
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1056599
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1058902
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1066582
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1072163
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1073604
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1074403
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1085375
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1087855
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1113926
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1117103
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1118280
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1119728
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1120056
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1120393
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1120655
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1121887
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1123337
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1124712
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1125400
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1127288
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1128113
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1128941
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1131598
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1135251
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1138895
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1163436
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1166348
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1170377
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1171965
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1173711
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1177254
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1179898
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1184507
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1184625
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1187662
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1187803
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1188053
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1188322
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1188336
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1189528
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1190713
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1191888
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1193065
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1194355
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1195536
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1199487
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1199932
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1201213
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1202065
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1206089
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1208760
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1212556
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1216144
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1222744
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1227470
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1227479
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1240170
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1240891
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1241907
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1242552
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1243598
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1244189
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1244806
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1245399
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1248288
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1248559
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1252622
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1252866
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1254656
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1259963
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1259990
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1263107
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1264624
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1266148
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1267010
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1268651
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1270153
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1275802
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1276194
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1276589
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1278473
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1278876
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1279252
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1281597
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1281737
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1283332
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1284093
00:40:52.779 Removing: /var/run/dpdk/spdk_pid1284234
00:40:52.779 Removing: /var/run/dpdk/spdk_pid931952
00:40:53.038 Removing: /var/run/dpdk/spdk_pid933072
00:40:53.038 Removing: /var/run/dpdk/spdk_pid934823
00:40:53.038 Removing: /var/run/dpdk/spdk_pid935544
00:40:53.038 Removing: /var/run/dpdk/spdk_pid936485
00:40:53.038 Removing: /var/run/dpdk/spdk_pid937003
00:40:53.038 Removing: /var/run/dpdk/spdk_pid937895
00:40:53.038 Removing: /var/run/dpdk/spdk_pid938132
00:40:53.038 Removing: /var/run/dpdk/spdk_pid938671
00:40:53.038 Removing: /var/run/dpdk/spdk_pid940127
00:40:53.038 Removing: /var/run/dpdk/spdk_pid941307
00:40:53.038 Removing: /var/run/dpdk/spdk_pid941890
00:40:53.038 Removing: /var/run/dpdk/spdk_pid942386
00:40:53.038 Removing: /var/run/dpdk/spdk_pid942952
00:40:53.038 Removing: /var/run/dpdk/spdk_pid943539
00:40:53.038 Removing: /var/run/dpdk/spdk_pid943706
00:40:53.038 Removing: /var/run/dpdk/spdk_pid944046
00:40:53.038 Removing: /var/run/dpdk/spdk_pid944410
00:40:53.038 Removing: /var/run/dpdk/spdk_pid944866
00:40:53.038 Removing: /var/run/dpdk/spdk_pid948117
00:40:53.038 Removing: /var/run/dpdk/spdk_pid948562
00:40:53.038 Removing: /var/run/dpdk/spdk_pid949115
00:40:53.038 Removing: /var/run/dpdk/spdk_pid949292
00:40:53.038 Removing: /var/run/dpdk/spdk_pid950614
00:40:53.038 Removing: /var/run/dpdk/spdk_pid950762
00:40:53.038 Removing: /var/run/dpdk/spdk_pid952108
00:40:53.038 Removing: /var/run/dpdk/spdk_pid952257
00:40:53.038 Removing: /var/run/dpdk/spdk_pid952704
00:40:53.038 Removing: /var/run/dpdk/spdk_pid952961
00:40:53.038 Removing: /var/run/dpdk/spdk_pid953389
00:40:53.038 Removing: /var/run/dpdk/spdk_pid953542
00:40:53.038 Removing: /var/run/dpdk/spdk_pid954574
00:40:53.038 Removing: /var/run/dpdk/spdk_pid954848
00:40:53.038 Removing: /var/run/dpdk/spdk_pid955060
00:40:53.038 Removing: /var/run/dpdk/spdk_pid955617
00:40:53.038 Removing: /var/run/dpdk/spdk_pid955830
00:40:53.038 Removing: /var/run/dpdk/spdk_pid956104
00:40:53.038 Removing: /var/run/dpdk/spdk_pid956519
00:40:53.038 Removing: /var/run/dpdk/spdk_pid956804
00:40:53.038 Removing: /var/run/dpdk/spdk_pid957130
00:40:53.038 Removing: /var/run/dpdk/spdk_pid957508
00:40:53.038 Removing: /var/run/dpdk/spdk_pid957803
00:40:53.038 Removing: /var/run/dpdk/spdk_pid958206
00:40:53.038 Removing: /var/run/dpdk/spdk_pid958500
00:40:53.038 Removing: /var/run/dpdk/spdk_pid958883
00:40:53.038 Removing: /var/run/dpdk/spdk_pid959196
00:40:53.038 Removing: /var/run/dpdk/spdk_pid959494
00:40:53.038 Removing: /var/run/dpdk/spdk_pid959901
00:40:53.038 Removing: /var/run/dpdk/spdk_pid960197
00:40:53.038 Removing: /var/run/dpdk/spdk_pid960596
00:40:53.038 Removing: /var/run/dpdk/spdk_pid960895
00:40:53.038 Removing: /var/run/dpdk/spdk_pid961188
00:40:53.038 Removing: /var/run/dpdk/spdk_pid961589
00:40:53.038 Removing: /var/run/dpdk/spdk_pid961888
00:40:53.038 Removing: /var/run/dpdk/spdk_pid962301
00:40:53.038 Removing: /var/run/dpdk/spdk_pid962598
00:40:53.038 Removing: /var/run/dpdk/spdk_pid962999
00:40:53.038 Removing: /var/run/dpdk/spdk_pid963274
00:40:53.038 Removing: /var/run/dpdk/spdk_pid963930
00:40:53.038 Removing: /var/run/dpdk/spdk_pid966386
00:40:53.038 Clean
00:40:53.038 08:07:44 -- common/autotest_common.sh@1451 -- # return 0
00:40:53.038 08:07:44 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:40:53.038 08:07:44 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:53.038 08:07:44 -- common/autotest_common.sh@10 -- # set +x
00:40:53.038 08:07:44 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:40:53.038 08:07:44 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:53.038 08:07:44 -- common/autotest_common.sh@10 -- # set +x
00:40:53.038 08:07:44 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:53.038 08:07:44 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:53.038 08:07:44 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:53.038 08:07:44 -- spdk/autotest.sh@391 -- # hash lcov
00:40:53.038 08:07:44 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:40:53.038 08:07:44 -- spdk/autotest.sh@393 -- # hostname
00:40:53.038 08:07:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:53.296 geninfo: WARNING: invalid characters removed from testname!
00:41:25.379 08:08:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:25.379 08:08:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:27.279 08:08:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:30.560 08:08:21 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:33.084 08:08:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:36.394 08:08:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:38.919 08:08:29 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
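Note: steps @393 through @400 above are the standard lcov consolidation sequence: capture the counters the tests produced, merge them with the pre-test baseline, then strip paths that should not count toward coverage and delete the intermediates. A condensed bash sketch of the same flow; SPDK_DIR, OUT, and the hostname-based test name are stand-ins for the workspace paths used in this run:

    #!/usr/bin/env bash
    # Condensed form of the coverage steps above; SPDK_DIR and OUT are
    # placeholders for the jenkins workspace paths in the log.
    SPDK_DIR=./spdk
    OUT=./output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Capture the counters accumulated while the tests ran. The matching
    # baseline (cov_base.info) is assumed to have been captured the same
    # way before the tests started.
    lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge baseline and test captures into a single tracefile.
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
        -o "$OUT/cov_total.info"

    # Remove third-party and helper code from the totals, one pattern at
    # a time, mirroring steps @395 through @399.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" \
            -o "$OUT/cov_total.info"
    done

    # Drop the intermediates once cov_total.info is final.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"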
00:41:38.919 08:08:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:41:38.919 08:08:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:41:38.919 08:08:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:41:38.919 08:08:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:41:38.919 08:08:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:38.920 08:08:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:38.920 08:08:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:38.920 08:08:29 -- paths/export.sh@5 -- $ export PATH
00:41:38.920 08:08:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:38.920 08:08:29 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:41:38.920 08:08:29 -- common/autobuild_common.sh@444 -- $ date +%s
00:41:38.920 08:08:29 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721023709.XXXXXX
00:41:38.920 08:08:29 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721023709.T0hyyk
00:41:38.920 08:08:29 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:41:38.920 08:08:29 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:41:38.920 08:08:29 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:41:38.920 08:08:29 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:41:38.920 08:08:29 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:41:38.920 08:08:29 -- common/autobuild_common.sh@460 -- $ get_config_params
00:41:38.920 08:08:29 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:41:38.920 08:08:29 -- common/autotest_common.sh@10 -- $ set +x
00:41:38.920 08:08:29 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:41:38.920 08:08:29 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:41:38.920 08:08:29 -- pm/common@17 -- $ local monitor
00:41:38.920 08:08:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:38.920 08:08:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:38.920 08:08:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:38.920 08:08:29 -- pm/common@21 -- $ date +%s
00:41:38.920 08:08:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:38.920 08:08:29 -- pm/common@21 -- $ date +%s
00:41:38.920 08:08:29 -- pm/common@25 -- $ sleep 1
00:41:38.920 08:08:29 -- pm/common@21 -- $ date +%s
00:41:38.920 08:08:29 -- pm/common@21 -- $ date +%s
00:41:38.920 08:08:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023709
00:41:38.920 08:08:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023709
00:41:38.920 08:08:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023709
00:41:38.920 08:08:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023709
00:41:38.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023709_collect-vmstat.pm.log
00:41:38.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023709_collect-cpu-load.pm.log
00:41:38.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023709_collect-cpu-temp.pm.log
00:41:38.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023709_collect-bmc-pm.bmc.pm.log
00:41:39.859 08:08:30 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:41:39.859 08:08:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:41:39.859 08:08:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:39.859 08:08:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:41:39.859 08:08:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:41:39.859 08:08:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:41:39.859 08:08:30 -- spdk/autopackage.sh@19 -- $ timing_finish
00:41:39.859 08:08:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:39.859 08:08:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:41:39.859 08:08:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:39.859 08:08:30 -- spdk/autopackage.sh@20 -- $ exit 0
00:41:39.859 08:08:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:41:39.859 08:08:30 -- pm/common@29 -- $ signal_monitor_resources TERM
00:41:39.859 08:08:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:41:39.859 08:08:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:39.859 08:08:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:41:39.859 08:08:30 -- pm/common@44 -- $ pid=1296694
00:41:39.859 08:08:30 -- pm/common@50 -- $ kill -TERM 1296694
00:41:39.859 08:08:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:39.859 08:08:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:41:39.859 08:08:30 -- pm/common@44 -- $ pid=1296696
00:41:39.859 08:08:30 -- pm/common@50 -- $ kill -TERM 1296696
00:41:39.859 08:08:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:39.859 08:08:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:41:39.859 08:08:30 -- pm/common@44 -- $ pid=1296698
00:41:39.859 08:08:30 -- pm/common@50 -- $ kill -TERM 1296698
00:41:39.859 08:08:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:39.859 08:08:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:41:39.859 08:08:30 -- pm/common@44 -- $ pid=1296727
00:41:39.859 08:08:30 -- pm/common@50 -- $ sudo -E kill -TERM 1296727
00:41:39.859 + [[ -n 845130 ]]
00:41:39.859 + sudo kill 845130
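Note: the pm/common sequence above is a plain pidfile protocol: start_monitor_resources launches each collector with a shared date-stamped prefix, each collector records its pid under the power output directory, and stop_monitor_resources (installed as an EXIT trap) later sends TERM to whatever pidfiles still exist, via sudo for the BMC collector since it runs as root. A schematic bash version; the directory, collector list, launch flags, and hand-written pidfile are assumptions based on the log lines, not the scripts' actual internals:

    #!/usr/bin/env bash
    # Schematic of the monitor start/stop protocol above; POWER_DIR and
    # MONITOR_RESOURCES are illustrative stand-ins.
    POWER_DIR=./output/power
    PREFIX="monitor.autopackage.sh.$(date +%s)"
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp)

    start_monitor_resources() {
        local monitor
        mkdir -p "$POWER_DIR"
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # Each collector runs in the background and logs under the
            # shared $PREFIX; the pidfile is written by hand here.
            "./scripts/perf/pm/$monitor" -d "$POWER_DIR" -l -p "$PREFIX" &
            echo $! > "$POWER_DIR/$monitor.pid"
        done
    }

    stop_monitor_resources() {
        local monitor pid signal=TERM
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            [[ -e "$POWER_DIR/$monitor.pid" ]] || continue
            pid=$(<"$POWER_DIR/$monitor.pid")
            kill -$signal "$pid" 2>/dev/null
        done
    }

    # Mirrors "trap stop_monitor_resources EXIT" from the log, so the
    # collectors are torn down however the build exits.
    trap stop_monitor_resources EXIT
    start_monitor_resources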
00:41:39.918 [Pipeline] }
00:41:39.938 [Pipeline] // stage
00:41:39.945 [Pipeline] }
00:41:39.962 [Pipeline] // timeout
00:41:39.967 [Pipeline] }
00:41:39.980 [Pipeline] // catchError
00:41:39.985 [Pipeline] }
00:41:39.998 [Pipeline] // wrap
00:41:40.005 [Pipeline] }
00:41:40.023 [Pipeline] // catchError
00:41:40.032 [Pipeline] stage
00:41:40.034 [Pipeline] { (Epilogue)
00:41:40.048 [Pipeline] catchError
00:41:40.050 [Pipeline] {
00:41:40.063 [Pipeline] echo
00:41:40.065 Cleanup processes
00:41:40.070 [Pipeline] sh
00:41:40.354 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:40.354 1296861 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:41:40.354 1296960 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:40.372 [Pipeline] sh
00:41:40.658 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:40.658 ++ grep -v 'sudo pgrep'
00:41:40.658 ++ awk '{print $1}'
00:41:40.658 + sudo kill -9 1296861
00:41:40.671 [Pipeline] sh
00:41:40.956 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:50.934 [Pipeline] sh
00:41:51.222 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:51.222 Artifacts sizes are good
00:41:51.242 [Pipeline] archiveArtifacts
00:41:51.251 Archiving artifacts
00:41:51.471 [Pipeline] sh
00:41:51.757 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:51.837 [Pipeline] cleanWs
00:41:51.850 [WS-CLEANUP] Deleting project workspace...
00:41:51.850 [WS-CLEANUP] Deferred wipeout is used...
00:41:51.859 [WS-CLEANUP] done
00:41:51.861 [Pipeline] }
00:41:51.884 [Pipeline] // catchError
00:41:51.901 [Pipeline] sh
00:41:52.185 + logger -p user.info -t JENKINS-CI
00:41:52.191 [Pipeline] }
00:41:52.205 [Pipeline] // stage
00:41:52.209 [Pipeline] }
00:41:52.222 [Pipeline] // node
00:41:52.227 [Pipeline] End of Pipeline
00:41:52.252 Finished: SUCCESS